Mar 21 04:17:13 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 21 04:17:13 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 21 04:17:13 crc restorecon[4702]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc 
restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc 
restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 
04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 
crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 
04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc 
restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 21 04:17:13 crc restorecon[4702]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc 
restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 21 04:17:13 crc restorecon[4702]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 21 04:17:13 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 
crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc 
restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc 
restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 21 04:17:14 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 21 04:17:15 crc kubenswrapper[4923]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 04:17:15 crc kubenswrapper[4923]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 21 04:17:15 crc kubenswrapper[4923]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 04:17:15 crc kubenswrapper[4923]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 21 04:17:15 crc kubenswrapper[4923]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 21 04:17:15 crc kubenswrapper[4923]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.883147 4923 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891712 4923 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891746 4923 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891757 4923 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891767 4923 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891776 4923 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891785 4923 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891792 4923 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891801 4923 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891809 4923 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 21 04:17:15 crc kubenswrapper[4923]: 
W0321 04:17:15.891818 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891826 4923 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891837 4923 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891849 4923 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891857 4923 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891866 4923 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891874 4923 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891883 4923 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891891 4923 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891899 4923 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891907 4923 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891915 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891922 4923 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891930 4923 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 
04:17:15.891938 4923 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891946 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891954 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891962 4923 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891969 4923 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891977 4923 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891985 4923 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.891993 4923 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892000 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892008 4923 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892016 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892036 4923 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892045 4923 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892053 4923 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892060 4923 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 21 04:17:15 
crc kubenswrapper[4923]: W0321 04:17:15.892068 4923 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892078 4923 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892086 4923 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892093 4923 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892101 4923 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892109 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892117 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892127 4923 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892137 4923 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892146 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892155 4923 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892163 4923 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892171 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892179 4923 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892187 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892197 4923 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892207 4923 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892216 4923 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892224 4923 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892232 4923 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892239 4923 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892247 4923 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892254 4923 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892263 4923 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892270 4923 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892278 4923 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892286 4923 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892293 4923 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892301 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892310 4923 feature_gate.go:330] unrecognized feature gate: Example Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892344 4923 
feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892352 4923 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.892360 4923 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894100 4923 flags.go:64] FLAG: --address="0.0.0.0" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894125 4923 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894148 4923 flags.go:64] FLAG: --anonymous-auth="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894160 4923 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894172 4923 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894181 4923 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894193 4923 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894204 4923 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894213 4923 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894222 4923 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894232 4923 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894242 4923 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894255 4923 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 21 04:17:15 crc 
kubenswrapper[4923]: I0321 04:17:15.894264 4923 flags.go:64] FLAG: --cgroup-root="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894273 4923 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894282 4923 flags.go:64] FLAG: --client-ca-file="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894291 4923 flags.go:64] FLAG: --cloud-config="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894299 4923 flags.go:64] FLAG: --cloud-provider="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894307 4923 flags.go:64] FLAG: --cluster-dns="[]" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894355 4923 flags.go:64] FLAG: --cluster-domain="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894364 4923 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894373 4923 flags.go:64] FLAG: --config-dir="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894382 4923 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894392 4923 flags.go:64] FLAG: --container-log-max-files="5" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894404 4923 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894413 4923 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894423 4923 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894432 4923 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894441 4923 flags.go:64] FLAG: --contention-profiling="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894450 4923 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 21 04:17:15 crc kubenswrapper[4923]: 
I0321 04:17:15.894459 4923 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894468 4923 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894477 4923 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894488 4923 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894497 4923 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894506 4923 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894515 4923 flags.go:64] FLAG: --enable-load-reader="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894527 4923 flags.go:64] FLAG: --enable-server="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894535 4923 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894546 4923 flags.go:64] FLAG: --event-burst="100" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894555 4923 flags.go:64] FLAG: --event-qps="50" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894564 4923 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894574 4923 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894583 4923 flags.go:64] FLAG: --eviction-hard="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894611 4923 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894620 4923 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894629 4923 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 21 04:17:15 crc 
kubenswrapper[4923]: I0321 04:17:15.894640 4923 flags.go:64] FLAG: --eviction-soft="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894650 4923 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894658 4923 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894667 4923 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894676 4923 flags.go:64] FLAG: --experimental-mounter-path="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894685 4923 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894694 4923 flags.go:64] FLAG: --fail-swap-on="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894703 4923 flags.go:64] FLAG: --feature-gates="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894713 4923 flags.go:64] FLAG: --file-check-frequency="20s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894723 4923 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894732 4923 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894741 4923 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894751 4923 flags.go:64] FLAG: --healthz-port="10248" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894760 4923 flags.go:64] FLAG: --help="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894768 4923 flags.go:64] FLAG: --hostname-override="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894777 4923 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894786 4923 flags.go:64] FLAG: --http-check-frequency="20s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 
04:17:15.894796 4923 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894805 4923 flags.go:64] FLAG: --image-credential-provider-config="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894813 4923 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894822 4923 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894831 4923 flags.go:64] FLAG: --image-service-endpoint="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894840 4923 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894849 4923 flags.go:64] FLAG: --kube-api-burst="100" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894857 4923 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894867 4923 flags.go:64] FLAG: --kube-api-qps="50" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894875 4923 flags.go:64] FLAG: --kube-reserved="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894885 4923 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894893 4923 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894905 4923 flags.go:64] FLAG: --kubelet-cgroups="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894914 4923 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894922 4923 flags.go:64] FLAG: --lock-file="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894931 4923 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894940 4923 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 21 04:17:15 crc 
kubenswrapper[4923]: I0321 04:17:15.894949 4923 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894962 4923 flags.go:64] FLAG: --log-json-split-stream="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894972 4923 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894981 4923 flags.go:64] FLAG: --log-text-split-stream="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894990 4923 flags.go:64] FLAG: --logging-format="text" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.894999 4923 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895009 4923 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895017 4923 flags.go:64] FLAG: --manifest-url="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895026 4923 flags.go:64] FLAG: --manifest-url-header="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895038 4923 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895046 4923 flags.go:64] FLAG: --max-open-files="1000000" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895057 4923 flags.go:64] FLAG: --max-pods="110" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895066 4923 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895075 4923 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895084 4923 flags.go:64] FLAG: --memory-manager-policy="None" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895093 4923 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895102 4923 flags.go:64] FLAG: 
--minimum-image-ttl-duration="2m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895111 4923 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895120 4923 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895140 4923 flags.go:64] FLAG: --node-status-max-images="50" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895149 4923 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895159 4923 flags.go:64] FLAG: --oom-score-adj="-999" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895168 4923 flags.go:64] FLAG: --pod-cidr="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895177 4923 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895191 4923 flags.go:64] FLAG: --pod-manifest-path="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895200 4923 flags.go:64] FLAG: --pod-max-pids="-1" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895209 4923 flags.go:64] FLAG: --pods-per-core="0" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895218 4923 flags.go:64] FLAG: --port="10250" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895230 4923 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895239 4923 flags.go:64] FLAG: --provider-id="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895247 4923 flags.go:64] FLAG: --qos-reserved="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895256 4923 flags.go:64] FLAG: --read-only-port="10255" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895265 4923 flags.go:64] FLAG: 
--register-node="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895274 4923 flags.go:64] FLAG: --register-schedulable="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895283 4923 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895302 4923 flags.go:64] FLAG: --registry-burst="10" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895311 4923 flags.go:64] FLAG: --registry-qps="5" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895343 4923 flags.go:64] FLAG: --reserved-cpus="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895356 4923 flags.go:64] FLAG: --reserved-memory="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895370 4923 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895384 4923 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895396 4923 flags.go:64] FLAG: --rotate-certificates="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895408 4923 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895420 4923 flags.go:64] FLAG: --runonce="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895430 4923 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895441 4923 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895452 4923 flags.go:64] FLAG: --seccomp-default="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895462 4923 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895474 4923 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895487 4923 flags.go:64] 
FLAG: --storage-driver-db="cadvisor" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895499 4923 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895510 4923 flags.go:64] FLAG: --storage-driver-password="root" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895521 4923 flags.go:64] FLAG: --storage-driver-secure="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895531 4923 flags.go:64] FLAG: --storage-driver-table="stats" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895543 4923 flags.go:64] FLAG: --storage-driver-user="root" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895554 4923 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895565 4923 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895576 4923 flags.go:64] FLAG: --system-cgroups="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895587 4923 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895605 4923 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895622 4923 flags.go:64] FLAG: --tls-cert-file="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895631 4923 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895643 4923 flags.go:64] FLAG: --tls-min-version="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895652 4923 flags.go:64] FLAG: --tls-private-key-file="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895661 4923 flags.go:64] FLAG: --topology-manager-policy="none" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895670 4923 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895679 
4923 flags.go:64] FLAG: --topology-manager-scope="container" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895688 4923 flags.go:64] FLAG: --v="2" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895699 4923 flags.go:64] FLAG: --version="false" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895710 4923 flags.go:64] FLAG: --vmodule="" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895721 4923 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.895731 4923 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896028 4923 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896042 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896067 4923 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896076 4923 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896085 4923 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896093 4923 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896101 4923 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896109 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896117 4923 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896126 4923 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896133 4923 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896144 4923 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896153 4923 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896161 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896231 4923 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896447 4923 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896545 4923 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896555 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896563 4923 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896571 4923 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896583 4923 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896592 4923 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896600 4923 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896607 4923 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes 
Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896615 4923 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896623 4923 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896634 4923 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896644 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896653 4923 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896661 4923 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896669 4923 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896677 4923 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896685 4923 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896692 4923 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896700 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896708 4923 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896716 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896723 4923 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 21 04:17:15 crc 
kubenswrapper[4923]: W0321 04:17:15.896746 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896753 4923 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896762 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896769 4923 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896777 4923 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896785 4923 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896792 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896800 4923 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896808 4923 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896815 4923 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896823 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896830 4923 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896838 4923 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896845 4923 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896856 4923 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896864 4923 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896871 4923 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896879 4923 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896887 4923 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896895 4923 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896902 4923 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896910 4923 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896918 4923 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896927 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896934 4923 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896942 4923 feature_gate.go:330] unrecognized feature gate: Example Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896949 4923 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896957 4923 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896964 4923 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 21 04:17:15 crc 
kubenswrapper[4923]: W0321 04:17:15.896972 4923 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896983 4923 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.896992 4923 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.897002 4923 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.897015 4923 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.911065 4923 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.911125 4923 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911305 4923 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911361 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911375 4923 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911387 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 21 04:17:15 crc 
kubenswrapper[4923]: W0321 04:17:15.911398 4923 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911409 4923 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911419 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911433 4923 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911452 4923 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911464 4923 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911475 4923 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911486 4923 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911500 4923 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911512 4923 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911523 4923 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911535 4923 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911545 4923 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911556 4923 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911566 4923 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911576 4923 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911586 4923 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911596 4923 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911605 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911616 4923 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911625 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911635 4923 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911645 4923 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 21 04:17:15 crc kubenswrapper[4923]: 
W0321 04:17:15.911655 4923 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911665 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911679 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911689 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911699 4923 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911709 4923 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911718 4923 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911728 4923 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911739 4923 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911750 4923 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911759 4923 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911770 4923 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911780 4923 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911790 4923 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911799 4923 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911810 4923 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911820 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911834 4923 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911850 4923 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911861 4923 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911871 4923 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911883 4923 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911894 4923 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911904 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911915 4923 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911928 4923 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911941 4923 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911955 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911967 4923 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911978 4923 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911988 4923 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.911998 4923 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912008 4923 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912018 4923 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912027 4923 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912037 4923 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912047 4923 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912057 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912071 4923 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912081 4923 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912092 4923 feature_gate.go:330] unrecognized feature 
gate: AzureWorkloadIdentity Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912102 4923 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912112 4923 feature_gate.go:330] unrecognized feature gate: Example Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912123 4923 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.912139 4923 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912528 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912551 4923 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912563 4923 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912574 4923 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912586 4923 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912599 4923 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912610 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 21 04:17:15 crc 
kubenswrapper[4923]: W0321 04:17:15.912621 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912633 4923 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912643 4923 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912654 4923 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912666 4923 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912677 4923 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912688 4923 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912701 4923 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912714 4923 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912728 4923 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912742 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912755 4923 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912769 4923 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912781 4923 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912792 4923 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912802 4923 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912812 4923 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912823 4923 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912834 4923 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912844 4923 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912854 4923 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912864 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912877 4923 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912888 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912897 4923 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912907 4923 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912917 4923 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 21 04:17:15 crc 
kubenswrapper[4923]: W0321 04:17:15.912928 4923 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912938 4923 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912948 4923 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912958 4923 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912967 4923 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912977 4923 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912987 4923 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.912997 4923 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913007 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913017 4923 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913028 4923 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913038 4923 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913048 4923 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913059 4923 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913068 4923 feature_gate.go:330] unrecognized feature 
gate: InsightsConfigAPI Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913078 4923 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913089 4923 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913099 4923 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913109 4923 feature_gate.go:330] unrecognized feature gate: Example Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913119 4923 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913129 4923 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913139 4923 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913152 4923 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913167 4923 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913179 4923 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913191 4923 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913203 4923 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913214 4923 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913225 4923 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913235 4923 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913245 4923 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913257 4923 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913268 4923 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913278 4923 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913288 4923 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913299 4923 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 21 04:17:15 crc kubenswrapper[4923]: W0321 04:17:15.913308 4923 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.913359 4923 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.913708 4923 server.go:940] "Client rotation is on, will bootstrap in background" Mar 21 04:17:15 crc kubenswrapper[4923]: E0321 04:17:15.924943 4923 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.931472 4923 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.931651 4923 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.945074 4923 server.go:997] "Starting client certificate rotation" Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.945134 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 21 04:17:15 crc kubenswrapper[4923]: I0321 04:17:15.945455 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.025770 4923 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.030475 4923 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.030895 4923 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.053746 4923 log.go:25] "Validated CRI v1 runtime API" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.135045 4923 log.go:25] "Validated CRI v1 image API" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.138056 4923 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.145738 4923 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-21-04-13-16-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.145782 4923 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.171448 4923 manager.go:217] Machine: {Timestamp:2026-03-21 04:17:16.16513764 +0000 UTC m=+1.318148747 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:cc949092-1947-409d-b7e5-479c6a1f7b47 BootID:cf8d7db8-ca59-4aa1-a66b-77ca41867327 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:ff:5f:85 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ff:5f:85 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1b:ef:00 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ef:f8:f6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:de:9b:bf Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:53:dc:79 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:da:89:0e:f9:65:ed Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:d2:a2:36:8c:46 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.171745 4923 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.172000 4923 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.172547 4923 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.172770 4923 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.172821 4923 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.173790 4923 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.173817 4923 container_manager_linux.go:303] "Creating device plugin manager"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.174459 4923 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.174486 4923 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.174662 4923 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.174755 4923 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.201076 4923 kubelet.go:418] "Attempting to sync node with API server"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.201107 4923 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.201126 4923 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.201141 4923 kubelet.go:324] "Adding apiserver pod source"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.201153 4923 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.208945 4923 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.216153 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.216298 4923 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.216302 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError"
Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.235897 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused
Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.235975 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.244471 4923 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248252 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248304 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248344 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248360 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248385 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248399 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248412 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248435 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248451 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248469 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248489 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.248502 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.250539 4923 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.251020 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.251233 4923 server.go:1280] "Started kubelet"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.251387 4923 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.251619 4923 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.252748 4923 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 04:17:16 crc systemd[1]: Started Kubernetes Kubelet.
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.257725 4923 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.257775 4923 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.258248 4923 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.258298 4923 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.258489 4923 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.258580 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.274163 4923 factory.go:55] Registering systemd factory
Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.278778 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused
Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.278879 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError"
Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.279054 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="200ms"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.279439 4923 factory.go:221] Registration of the systemd container factory successfully
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.282480 4923 server.go:460] "Adding debug handlers to kubelet server"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.285599 4923 factory.go:153] Registering CRI-O factory
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.285640 4923 factory.go:221] Registration of the crio container factory successfully
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.285755 4923 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.285803 4923 factory.go:103] Registering Raw factory
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.285831 4923 manager.go:1196] Started watching for new ooms in manager
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.286760 4923 manager.go:319] Starting recovery of all containers
Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.287576 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.306093 4923 manager.go:324] Recovery completed
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.306991 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307060 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307074 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307084 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307096 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307106 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307117 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307145 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307157 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307168 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307181 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307193 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307220 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307233 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307244 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307254 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307267 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307279 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307306 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307335 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307348 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307360 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307371 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307383 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307394 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307423 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307438 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307453 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307499 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307510 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307522 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307532 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307542 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307552 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307578 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307588 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307597 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307607 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307621 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307634 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307664 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307677 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307689 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307699 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307709 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.307734 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309629 4923 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309672 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309684 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309694 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309707 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309733 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309744 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309759 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309772 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309783 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309809 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309821 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309834 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309845 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309856 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309867 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309891 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309901 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309911 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309921 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309941 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309967 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309978 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.309988 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310010 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310020 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310045 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310057 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310067 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310076 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310085 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310095 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310120 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310130 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310140 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310149 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310160 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 
04:17:16.310169 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310180 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310214 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310225 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310235 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310246 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310256 4923 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310265 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310279 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310302 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310313 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310374 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310384 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310395 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310405 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310417 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310445 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310499 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310515 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310558 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310574 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310589 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310607 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310621 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310634 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310667 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310679 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310690 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310700 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310710 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310722 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310734 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310744 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310753 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310762 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310771 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310780 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310790 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310799 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310809 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310820 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310829 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310839 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310850 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310860 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310875 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310885 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310895 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310905 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310913 4923 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310922 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310931 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310940 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310949 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310958 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310966 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310974 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310983 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.310991 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311001 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311012 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311020 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311044 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311054 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311063 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311072 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311080 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311090 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311099 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311107 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311116 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311129 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311140 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311149 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 21 04:17:16 crc 
kubenswrapper[4923]: I0321 04:17:16.311157 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311165 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311175 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311185 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311195 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311205 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311214 4923 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311222 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311231 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311239 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311248 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311259 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311268 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311276 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311285 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311293 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311302 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311310 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311336 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311355 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311364 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311375 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311385 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311394 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311403 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311412 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311421 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311430 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311439 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311450 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311459 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" 
seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311468 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311476 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311486 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311495 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311504 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311512 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311522 4923 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311532 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311541 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311550 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311557 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311564 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311573 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311582 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311590 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311598 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311607 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311614 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311622 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311631 4923 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311639 4923 reconstruct.go:97] "Volume reconstruction finished" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.311647 4923 reconciler.go:26] "Reconciler: start to sync state" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.321837 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.324121 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.324512 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.324532 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.325755 4923 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.325773 4923 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.325792 4923 state_mem.go:36] "Initialized new in-memory state store" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.355255 4923 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.357060 4923 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.357131 4923 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.357177 4923 kubelet.go:2335] "Starting kubelet main sync loop" Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.357271 4923 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.358669 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.359554 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.359687 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.380961 4923 policy_none.go:49] "None policy: Start" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.382047 4923 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.382084 4923 state_mem.go:35] "Initializing new in-memory state store" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.438738 4923 manager.go:334] "Starting Device Plugin manager" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.438898 4923 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.438913 4923 server.go:79] "Starting device plugin registration server" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.439597 4923 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.439615 4923 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.439802 4923 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.439934 4923 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.439950 4923 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.448676 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.457967 4923 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.458087 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.469649 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.469687 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.469696 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.469825 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.470290 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.470418 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.470647 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.470701 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.470718 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.470878 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.471158 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.471244 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.471688 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.471726 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.471743 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.471893 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472024 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472046 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472057 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472090 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472134 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472381 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472422 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.472444 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473185 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473224 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473242 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473291 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473376 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473389 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473743 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473869 
4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.473921 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.474858 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.474906 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.474926 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.475081 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.475105 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.475114 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.475301 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.475362 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.476119 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.476165 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.476178 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.480762 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="400ms" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.514413 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.514478 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.539925 4923 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.541091 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.541135 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.541150 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.541180 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.541712 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.615861 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.615905 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.615929 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.615950 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.615970 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.615992 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616100 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616178 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616366 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616440 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616487 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616507 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616522 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616557 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616590 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616605 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.616660 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718158 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718250 
4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718500 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718582 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718618 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718663 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718714 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718745 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718806 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718850 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718897 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.718952 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719031 4923 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719046 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719133 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719202 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719223 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719278 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719395 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719376 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719403 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719505 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719551 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719616 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719683 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.719822 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.742583 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.744903 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.745084 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.745104 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.745170 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.745947 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: 
connection refused" node="crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.803595 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.822136 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.845911 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.869827 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: I0321 04:17:16.877431 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.879709 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ca0a2d7615e92b6b676ea56da7df40b8fd78e67af8b71d3deb1d2918781d347d WatchSource:0}: Error finding container ca0a2d7615e92b6b676ea56da7df40b8fd78e67af8b71d3deb1d2918781d347d: Status 404 returned error can't find the container with id ca0a2d7615e92b6b676ea56da7df40b8fd78e67af8b71d3deb1d2918781d347d Mar 21 04:17:16 crc kubenswrapper[4923]: E0321 04:17:16.882031 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="800ms" Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.882154 4923 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1f415780324cf8d4425fa58542fb4ea1d2eecafc1d572608b2c81d1ef8e4c4f4 WatchSource:0}: Error finding container 1f415780324cf8d4425fa58542fb4ea1d2eecafc1d572608b2c81d1ef8e4c4f4: Status 404 returned error can't find the container with id 1f415780324cf8d4425fa58542fb4ea1d2eecafc1d572608b2c81d1ef8e4c4f4 Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.896337 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d0784668f95c9410438fa144caa500b4fda4fc9a0d3da1bd97765f5b8442c55f WatchSource:0}: Error finding container d0784668f95c9410438fa144caa500b4fda4fc9a0d3da1bd97765f5b8442c55f: Status 404 returned error can't find the container with id d0784668f95c9410438fa144caa500b4fda4fc9a0d3da1bd97765f5b8442c55f Mar 21 04:17:16 crc kubenswrapper[4923]: W0321 04:17:16.900617 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0f5fdaa08e45ef201163bb08ae1315520da58d8d52fe357257bead793e749629 WatchSource:0}: Error finding container 0f5fdaa08e45ef201163bb08ae1315520da58d8d52fe357257bead793e749629: Status 404 returned error can't find the container with id 0f5fdaa08e45ef201163bb08ae1315520da58d8d52fe357257bead793e749629 Mar 21 04:17:17 crc kubenswrapper[4923]: W0321 04:17:17.060948 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.061361 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.146298 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.147873 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.147916 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.147935 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.147963 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.148399 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.252551 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.366162 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d0784668f95c9410438fa144caa500b4fda4fc9a0d3da1bd97765f5b8442c55f"} Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.367938 4923 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bf900cadb98be3d9063f2fd084cbad15378c446862d315ca13ade641ef54a1da"} Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.369894 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1f415780324cf8d4425fa58542fb4ea1d2eecafc1d572608b2c81d1ef8e4c4f4"} Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.371811 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca0a2d7615e92b6b676ea56da7df40b8fd78e67af8b71d3deb1d2918781d347d"} Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.373427 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f5fdaa08e45ef201163bb08ae1315520da58d8d52fe357257bead793e749629"} Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.414541 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:17:17 crc kubenswrapper[4923]: W0321 04:17:17.522703 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.522818 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:17 crc kubenswrapper[4923]: W0321 04:17:17.653757 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.653867 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.683239 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="1.6s" Mar 21 04:17:17 crc kubenswrapper[4923]: W0321 04:17:17.844639 4923 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.844746 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.949406 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.950938 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.950969 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.950980 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:17 crc kubenswrapper[4923]: I0321 04:17:17.951007 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:17 crc kubenswrapper[4923]: E0321 04:17:17.951469 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.227689 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 21 04:17:18 crc kubenswrapper[4923]: E0321 04:17:18.229435 4923 certificate_manager.go:562] "Unhandled 
Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.252761 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.378568 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"929c4a6fd1d34acd75de2e0ae728dd6a6628aefce5eb1e246e2324b562b6ab5c"} Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.378620 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3c917c628db6baeed175000981dac16ae873338ec2de4eff09c21980a58785e"} Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.380598 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc" exitCode=0 Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.380664 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc"} Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.380775 4923 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.381831 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.381873 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.381888 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.382629 4923 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e183fc0c093c5893127609454416420946b51e8f1ee2332612e3b09dd43f88b4" exitCode=0 Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.382684 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e183fc0c093c5893127609454416420946b51e8f1ee2332612e3b09dd43f88b4"} Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.382819 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.383473 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.383911 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.383938 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.383949 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:18 crc 
kubenswrapper[4923]: I0321 04:17:18.384291 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.384334 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.384362 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.384932 4923 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0e46424958e68afb6fc6e4784275047e6dcfce223e753bc4f31a4e6f131a393e" exitCode=0 Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.384983 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.385034 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0e46424958e68afb6fc6e4784275047e6dcfce223e753bc4f31a4e6f131a393e"} Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.385750 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.385804 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.385818 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.387020 4923 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="356fb026c2af2887c039ea10c7b989c429695b96d968309935d0b180df43ca7a" 
exitCode=0 Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.387043 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"356fb026c2af2887c039ea10c7b989c429695b96d968309935d0b180df43ca7a"} Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.387168 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.388097 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.388142 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:18 crc kubenswrapper[4923]: I0321 04:17:18.388160 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.252022 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:19 crc kubenswrapper[4923]: E0321 04:17:19.283851 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="3.2s" Mar 21 04:17:19 crc kubenswrapper[4923]: W0321 04:17:19.284039 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:19 crc 
kubenswrapper[4923]: E0321 04:17:19.284148 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:19 crc kubenswrapper[4923]: W0321 04:17:19.381682 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:19 crc kubenswrapper[4923]: E0321 04:17:19.381786 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.390432 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"759950f1ffb44fd2cb2df367b5347dd03ea2b0aeeb150258689cc18ac37875a4"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.390488 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.391292 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.391335 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:19 crc 
kubenswrapper[4923]: I0321 04:17:19.391345 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.392410 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a89d224667797e4af1358a9e154fa9418def403ba83a5bc7e1c4bf9c0c15a8be"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.392436 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eaaf8507a64193170ffe695d37631440b1e7f9ebdb3db7e25c09849666b17576"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.392447 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"df8d6abc4d4d92edd75873e470a7d6f9286b3434106c44706bfe3510ce7f375f"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.392489 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.393438 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.393466 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.393478 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.394216 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd332d7bdc1aa01d1de78894731d3bd7b7cf365080d0a72619c1dfaeea62a698"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.394249 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"69667bd111f27cd8d08c2fc0e27eba7f7fc78dece35d96679839f8a0b6253a56"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.394283 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.397515 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.397559 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.397572 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.400104 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.400144 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.400155 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.401961 4923 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="61e8fc7f931d6e2911758e912b760f08c29d20d0a283c45466dc3634e403773e" exitCode=0 Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.402009 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"61e8fc7f931d6e2911758e912b760f08c29d20d0a283c45466dc3634e403773e"} Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.402093 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.402988 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.403023 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.403033 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:19 crc kubenswrapper[4923]: W0321 04:17:19.522093 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:19 crc kubenswrapper[4923]: E0321 04:17:19.522198 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.552106 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.553239 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.553281 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.553295 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.553341 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:19 crc kubenswrapper[4923]: E0321 04:17:19.553905 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Mar 21 04:17:19 crc kubenswrapper[4923]: I0321 04:17:19.691063 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:17:20 crc kubenswrapper[4923]: W0321 04:17:20.201824 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:20 crc kubenswrapper[4923]: E0321 04:17:20.201933 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.251645 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.408533 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f85f08c3159ae09902a1359de02b66fc46e185d454c93f3ce6e0458d2fa6c245"} Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.408627 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97"} Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.408654 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.409502 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.409551 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.409569 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.410942 4923 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="c1bc06442e3629ea1bee6fa12520d8b6a0b4ff8b597f724d590d1991f282fcac" exitCode=0 Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.410977 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c1bc06442e3629ea1bee6fa12520d8b6a0b4ff8b597f724d590d1991f282fcac"} Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.411049 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.411080 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.411054 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.411054 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.413939 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.413979 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.413992 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414010 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414057 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414026 4923 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414087 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414107 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414122 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414633 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414680 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:20 crc kubenswrapper[4923]: I0321 04:17:20.414699 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.416751 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.420541 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f85f08c3159ae09902a1359de02b66fc46e185d454c93f3ce6e0458d2fa6c245" exitCode=255 Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.420644 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.420671 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f85f08c3159ae09902a1359de02b66fc46e185d454c93f3ce6e0458d2fa6c245"} Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.421670 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.421709 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.421726 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.422436 4923 scope.go:117] "RemoveContainer" containerID="f85f08c3159ae09902a1359de02b66fc46e185d454c93f3ce6e0458d2fa6c245" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.426021 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"962f4e1df514ea44fe32aa2fa06542b39d0682078ea54dd5bca86be5db9ab4ea"} Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.426086 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b1b762a8cd6fe3ec100afaa06c771d99db1d83c762933d0c4843c5339bdae80a"} Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.426107 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a0df3f79c5b5b928fe3a2f81ec5889f555eb71043135cf84eead89fdaeb39a4a"} Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.426131 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.427637 4923 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.427682 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.427697 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:21 crc kubenswrapper[4923]: I0321 04:17:21.794508 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.372716 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.432231 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.434643 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678"} Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.434777 4923 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.434836 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.435976 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.436059 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:22 crc 
kubenswrapper[4923]: I0321 04:17:22.436084 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.439258 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5781ecdc447db84a748e35a3a58ee43ecaca8afa904153142daa2b8322106010"} Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.439308 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"40803142bb8abda3c2b04d75732a3da699e1a526a7b890ed540d944b681be3b7"} Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.439430 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.440602 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.440653 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.440677 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.754288 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.755989 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.756031 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.756043 
4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:22 crc kubenswrapper[4923]: I0321 04:17:22.756072 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.442817 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.442918 4923 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.443025 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.443898 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.444072 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.444134 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.444445 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.444502 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.444517 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.602950 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:23 crc kubenswrapper[4923]: 
I0321 04:17:23.603513 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.605610 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.605730 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.605755 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:23 crc kubenswrapper[4923]: I0321 04:17:23.613229 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.372379 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.445405 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.447226 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.447300 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.447364 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.546035 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.546224 4923 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.546276 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.547462 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.547518 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.547538 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:24 crc kubenswrapper[4923]: I0321 04:17:24.739029 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.448397 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.450001 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.450057 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.450078 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.851502 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.851780 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:25 crc 
kubenswrapper[4923]: I0321 04:17:25.853285 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.853348 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:25 crc kubenswrapper[4923]: I0321 04:17:25.853364 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.140827 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.141093 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.142992 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.143044 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.143062 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.415773 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:26 crc kubenswrapper[4923]: E0321 04:17:26.449017 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.450224 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.451739 4923 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.451797 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:26 crc kubenswrapper[4923]: I0321 04:17:26.451818 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:27 crc kubenswrapper[4923]: I0321 04:17:27.453690 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:27 crc kubenswrapper[4923]: I0321 04:17:27.455312 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:27 crc kubenswrapper[4923]: I0321 04:17:27.455418 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:27 crc kubenswrapper[4923]: I0321 04:17:27.455441 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:27 crc kubenswrapper[4923]: I0321 04:17:27.463073 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:28 crc kubenswrapper[4923]: I0321 04:17:28.457229 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:28 crc kubenswrapper[4923]: I0321 04:17:28.458716 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:28 crc kubenswrapper[4923]: I0321 04:17:28.458767 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:28 crc kubenswrapper[4923]: I0321 04:17:28.458780 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 21 04:17:29 crc kubenswrapper[4923]: I0321 04:17:29.416706 4923 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:17:29 crc kubenswrapper[4923]: I0321 04:17:29.416861 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 04:17:30 crc kubenswrapper[4923]: I0321 04:17:30.968552 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z Mar 21 04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.972975 4923 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.974677 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 21 04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.976243 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" node="crc" Mar 21 04:17:30 crc kubenswrapper[4923]: W0321 04:17:30.978025 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z Mar 21 04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.978105 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:30 crc kubenswrapper[4923]: W0321 04:17:30.979644 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z Mar 21 
04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.979711 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:30 crc kubenswrapper[4923]: W0321 04:17:30.983072 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z Mar 21 04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.983127 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.985621 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:17:30 crc kubenswrapper[4923]: W0321 04:17:30.987378 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z Mar 21 04:17:30 crc kubenswrapper[4923]: E0321 04:17:30.987447 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:30 crc kubenswrapper[4923]: I0321 04:17:30.988393 4923 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not 
found]","reason":"Forbidden","details":{},"code":403} Mar 21 04:17:30 crc kubenswrapper[4923]: I0321 04:17:30.988433 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 21 04:17:30 crc kubenswrapper[4923]: I0321 04:17:30.993358 4923 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Mar 21 04:17:30 crc kubenswrapper[4923]: I0321 04:17:30.993424 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.254032 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:31Z is after 2026-02-23T05:33:13Z Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.466577 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 21 04:17:31 crc kubenswrapper[4923]: 
I0321 04:17:31.467172 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.469394 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678" exitCode=255 Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.469457 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678"} Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.469589 4923 scope.go:117] "RemoveContainer" containerID="f85f08c3159ae09902a1359de02b66fc46e185d454c93f3ce6e0458d2fa6c245" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.469754 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.470808 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.470860 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.470881 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.471736 4923 scope.go:117] "RemoveContainer" containerID="4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678" Mar 21 04:17:31 crc kubenswrapper[4923]: E0321 04:17:31.472024 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" 
with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.833235 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.833613 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.835213 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.835256 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.835267 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:31 crc kubenswrapper[4923]: I0321 04:17:31.878189 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 21 04:17:32 crc kubenswrapper[4923]: I0321 04:17:32.254724 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:32Z is after 2026-02-23T05:33:13Z Mar 21 04:17:32 crc kubenswrapper[4923]: I0321 04:17:32.475171 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 21 04:17:32 crc kubenswrapper[4923]: I0321 04:17:32.478526 4923 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:32 crc kubenswrapper[4923]: I0321 04:17:32.479772 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:32 crc kubenswrapper[4923]: I0321 04:17:32.479826 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:32 crc kubenswrapper[4923]: I0321 04:17:32.479849 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:32 crc kubenswrapper[4923]: I0321 04:17:32.497862 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 21 04:17:33 crc kubenswrapper[4923]: I0321 04:17:33.257960 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:33Z is after 2026-02-23T05:33:13Z Mar 21 04:17:33 crc kubenswrapper[4923]: I0321 04:17:33.480965 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:33 crc kubenswrapper[4923]: I0321 04:17:33.482259 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:33 crc kubenswrapper[4923]: I0321 04:17:33.482369 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:33 crc kubenswrapper[4923]: I0321 04:17:33.482398 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.257866 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:34Z is after 2026-02-23T05:33:13Z Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.555020 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.555252 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.556727 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.556797 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.556824 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.557783 4923 scope.go:117] "RemoveContainer" containerID="4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678" Mar 21 04:17:34 crc kubenswrapper[4923]: E0321 04:17:34.558075 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:17:34 crc kubenswrapper[4923]: I0321 04:17:34.562579 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:35 crc kubenswrapper[4923]: 
I0321 04:17:35.257157 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:35Z is after 2026-02-23T05:33:13Z Mar 21 04:17:35 crc kubenswrapper[4923]: I0321 04:17:35.485912 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:35 crc kubenswrapper[4923]: I0321 04:17:35.486715 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:35 crc kubenswrapper[4923]: I0321 04:17:35.486801 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:35 crc kubenswrapper[4923]: I0321 04:17:35.486818 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:35 crc kubenswrapper[4923]: I0321 04:17:35.487808 4923 scope.go:117] "RemoveContainer" containerID="4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678" Mar 21 04:17:35 crc kubenswrapper[4923]: E0321 04:17:35.488077 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:17:36 crc kubenswrapper[4923]: I0321 04:17:36.255090 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-03-21T04:17:36Z is after 2026-02-23T05:33:13Z Mar 21 04:17:36 crc kubenswrapper[4923]: E0321 04:17:36.449393 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:17:36 crc kubenswrapper[4923]: I0321 04:17:36.795640 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:36 crc kubenswrapper[4923]: I0321 04:17:36.795854 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:36 crc kubenswrapper[4923]: I0321 04:17:36.797406 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:36 crc kubenswrapper[4923]: I0321 04:17:36.797458 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:36 crc kubenswrapper[4923]: I0321 04:17:36.797475 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:36 crc kubenswrapper[4923]: I0321 04:17:36.798245 4923 scope.go:117] "RemoveContainer" containerID="4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678" Mar 21 04:17:36 crc kubenswrapper[4923]: E0321 04:17:36.798551 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:17:37 crc kubenswrapper[4923]: I0321 04:17:37.256478 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:37Z is after 2026-02-23T05:33:13Z Mar 21 04:17:37 crc kubenswrapper[4923]: I0321 04:17:37.376611 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:37 crc kubenswrapper[4923]: I0321 04:17:37.378236 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:37 crc kubenswrapper[4923]: I0321 04:17:37.378517 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:37 crc kubenswrapper[4923]: I0321 04:17:37.378672 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:37 crc kubenswrapper[4923]: I0321 04:17:37.378924 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:37 crc kubenswrapper[4923]: E0321 04:17:37.380067 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:37Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 21 04:17:37 crc kubenswrapper[4923]: E0321 04:17:37.384073 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:37Z is after 2026-02-23T05:33:13Z" node="crc" Mar 21 04:17:38 crc kubenswrapper[4923]: I0321 04:17:38.256989 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:38Z is after 2026-02-23T05:33:13Z Mar 21 04:17:38 crc kubenswrapper[4923]: W0321 04:17:38.745915 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:38Z is after 2026-02-23T05:33:13Z Mar 21 04:17:38 crc kubenswrapper[4923]: E0321 04:17:38.746051 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:39 crc kubenswrapper[4923]: I0321 04:17:39.255525 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:39Z is after 2026-02-23T05:33:13Z Mar 21 04:17:39 crc kubenswrapper[4923]: I0321 04:17:39.417218 4923 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 
04:17:39 crc kubenswrapper[4923]: I0321 04:17:39.417291 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:17:39 crc kubenswrapper[4923]: I0321 04:17:39.731241 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 21 04:17:39 crc kubenswrapper[4923]: E0321 04:17:39.738755 4923 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:39 crc kubenswrapper[4923]: W0321 04:17:39.814572 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:39Z is after 2026-02-23T05:33:13Z Mar 21 04:17:39 crc kubenswrapper[4923]: E0321 04:17:39.814680 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-21T04:17:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:40 crc kubenswrapper[4923]: I0321 04:17:40.257168 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:40Z is after 2026-02-23T05:33:13Z Mar 21 04:17:40 crc kubenswrapper[4923]: W0321 04:17:40.580506 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:40Z is after 2026-02-23T05:33:13Z Mar 21 04:17:40 crc kubenswrapper[4923]: E0321 04:17:40.581389 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:40 crc kubenswrapper[4923]: W0321 04:17:40.777856 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:40Z is after 2026-02-23T05:33:13Z Mar 21 04:17:40 crc kubenswrapper[4923]: E0321 04:17:40.777925 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to 
list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:40 crc kubenswrapper[4923]: E0321 04:17:40.992710 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:40Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:17:41 crc kubenswrapper[4923]: I0321 04:17:41.257179 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:41Z is after 2026-02-23T05:33:13Z Mar 21 04:17:42 crc kubenswrapper[4923]: I0321 04:17:42.257400 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-21T04:17:42Z is after 2026-02-23T05:33:13Z Mar 21 04:17:43 crc kubenswrapper[4923]: I0321 04:17:43.255450 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:43Z is after 2026-02-23T05:33:13Z Mar 21 04:17:44 crc kubenswrapper[4923]: I0321 04:17:44.256850 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:44Z is after 2026-02-23T05:33:13Z Mar 21 04:17:44 crc kubenswrapper[4923]: I0321 04:17:44.384443 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:44 crc kubenswrapper[4923]: I0321 04:17:44.386218 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:44 crc kubenswrapper[4923]: I0321 04:17:44.386290 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:44 crc kubenswrapper[4923]: I0321 04:17:44.386312 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:44 crc kubenswrapper[4923]: I0321 04:17:44.386425 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:44 crc kubenswrapper[4923]: E0321 04:17:44.386960 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-21T04:17:44Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 21 04:17:44 crc kubenswrapper[4923]: E0321 04:17:44.390715 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:44Z is after 2026-02-23T05:33:13Z" node="crc" Mar 21 04:17:45 crc kubenswrapper[4923]: I0321 04:17:45.254918 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:45Z is after 2026-02-23T05:33:13Z Mar 21 04:17:46 crc kubenswrapper[4923]: I0321 04:17:46.255022 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:46Z is after 2026-02-23T05:33:13Z Mar 21 04:17:46 crc kubenswrapper[4923]: E0321 04:17:46.449543 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:17:47 crc kubenswrapper[4923]: I0321 04:17:47.255807 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:47Z is after 2026-02-23T05:33:13Z Mar 21 04:17:48 crc kubenswrapper[4923]: I0321 04:17:48.255745 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:48Z is after 2026-02-23T05:33:13Z Mar 21 04:17:48 crc kubenswrapper[4923]: I0321 04:17:48.357574 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:48 crc kubenswrapper[4923]: I0321 04:17:48.358751 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:48 crc kubenswrapper[4923]: I0321 04:17:48.358808 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:48 crc kubenswrapper[4923]: I0321 04:17:48.358821 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:48 crc kubenswrapper[4923]: I0321 04:17:48.359523 4923 scope.go:117] "RemoveContainer" containerID="4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.255808 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:49Z is after 2026-02-23T05:33:13Z Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.416383 4923 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.416539 4923 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.416629 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.416831 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.419743 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.419788 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.419801 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.420586 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"929c4a6fd1d34acd75de2e0ae728dd6a6628aefce5eb1e246e2324b562b6ab5c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.420827 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
containerID="cri-o://929c4a6fd1d34acd75de2e0ae728dd6a6628aefce5eb1e246e2324b562b6ab5c" gracePeriod=30 Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.533650 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.534424 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.537301 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="35be0f159b3e2b942a8f86191a9011e4072511e14ca6ae37ad5cb9e68768ca09" exitCode=255 Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.537415 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"35be0f159b3e2b942a8f86191a9011e4072511e14ca6ae37ad5cb9e68768ca09"} Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.537482 4923 scope.go:117] "RemoveContainer" containerID="4620edf5361689f0eea8a609b411d922d1bb24cd76f7ed1c3627923f0c10a678" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.537677 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.539018 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.539078 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.539103 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 21 04:17:49 crc kubenswrapper[4923]: I0321 04:17:49.540089 4923 scope.go:117] "RemoveContainer" containerID="35be0f159b3e2b942a8f86191a9011e4072511e14ca6ae37ad5cb9e68768ca09" Mar 21 04:17:49 crc kubenswrapper[4923]: E0321 04:17:49.540470 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.255701 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:50Z is after 2026-02-23T05:33:13Z Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.544017 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.544724 4923 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="929c4a6fd1d34acd75de2e0ae728dd6a6628aefce5eb1e246e2324b562b6ab5c" exitCode=255 Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.544763 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"929c4a6fd1d34acd75de2e0ae728dd6a6628aefce5eb1e246e2324b562b6ab5c"} Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.544812 4923 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3fa2d1a0bdf87709c1d8c1c27da67657d8c08e34e65376cfafe0bff901d246ea"} Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.544913 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.546114 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.546155 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.546167 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:50 crc kubenswrapper[4923]: I0321 04:17:50.547187 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 21 04:17:50 crc kubenswrapper[4923]: E0321 04:17:50.997256 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:50Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC 
m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:17:51 crc kubenswrapper[4923]: I0321 04:17:51.260280 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:51Z is after 2026-02-23T05:33:13Z Mar 21 04:17:51 crc kubenswrapper[4923]: I0321 04:17:51.391804 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:51 crc kubenswrapper[4923]: E0321 04:17:51.392100 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:51Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 21 04:17:51 crc kubenswrapper[4923]: I0321 04:17:51.393555 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:51 crc kubenswrapper[4923]: I0321 04:17:51.393632 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:51 crc kubenswrapper[4923]: I0321 04:17:51.393657 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:51 crc kubenswrapper[4923]: I0321 04:17:51.393708 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:51 crc kubenswrapper[4923]: E0321 04:17:51.397191 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:51Z is after 2026-02-23T05:33:13Z" node="crc" Mar 21 04:17:52 crc kubenswrapper[4923]: I0321 04:17:52.256876 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:52Z is after 2026-02-23T05:33:13Z Mar 21 04:17:53 crc kubenswrapper[4923]: I0321 04:17:53.258691 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:53Z is after 2026-02-23T05:33:13Z Mar 21 04:17:54 crc kubenswrapper[4923]: I0321 04:17:54.256187 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:54Z is after 2026-02-23T05:33:13Z Mar 21 04:17:54 crc kubenswrapper[4923]: I0321 04:17:54.739754 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:54 crc kubenswrapper[4923]: I0321 04:17:54.740023 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:54 crc kubenswrapper[4923]: I0321 04:17:54.741830 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:54 crc kubenswrapper[4923]: I0321 04:17:54.741904 4923 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:54 crc kubenswrapper[4923]: I0321 04:17:54.741930 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:55 crc kubenswrapper[4923]: I0321 04:17:55.255768 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:55Z is after 2026-02-23T05:33:13Z Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.141937 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.142737 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.144261 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.144370 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.144396 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.145176 4923 scope.go:117] "RemoveContainer" containerID="35be0f159b3e2b942a8f86191a9011e4072511e14ca6ae37ad5cb9e68768ca09" Mar 21 04:17:56 crc kubenswrapper[4923]: E0321 04:17:56.145492 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.256721 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:56Z is after 2026-02-23T05:33:13Z Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.416559 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.416864 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.418622 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.418676 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.418689 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:56 crc kubenswrapper[4923]: E0321 04:17:56.449684 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:17:56 crc kubenswrapper[4923]: W0321 04:17:56.593094 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-21T04:17:56Z is after 2026-02-23T05:33:13Z Mar 21 04:17:56 crc kubenswrapper[4923]: E0321 04:17:56.593239 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.795379 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.795631 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.802242 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.802297 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.802315 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:56 crc kubenswrapper[4923]: I0321 04:17:56.803281 4923 scope.go:117] "RemoveContainer" containerID="35be0f159b3e2b942a8f86191a9011e4072511e14ca6ae37ad5cb9e68768ca09" Mar 21 04:17:56 crc kubenswrapper[4923]: E0321 04:17:56.803614 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:17:57 crc kubenswrapper[4923]: I0321 04:17:57.093493 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 21 04:17:57 crc kubenswrapper[4923]: E0321 04:17:57.097638 4923 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:57 crc kubenswrapper[4923]: E0321 04:17:57.098874 4923 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 21 04:17:57 crc kubenswrapper[4923]: I0321 04:17:57.255796 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:57Z is after 2026-02-23T05:33:13Z Mar 21 04:17:57 crc kubenswrapper[4923]: W0321 04:17:57.961873 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:57Z is after 2026-02-23T05:33:13Z Mar 21 04:17:57 crc 
kubenswrapper[4923]: E0321 04:17:57.961999 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:17:58 crc kubenswrapper[4923]: I0321 04:17:58.257512 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:58Z is after 2026-02-23T05:33:13Z Mar 21 04:17:58 crc kubenswrapper[4923]: E0321 04:17:58.397155 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:58Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 21 04:17:58 crc kubenswrapper[4923]: I0321 04:17:58.398205 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:17:58 crc kubenswrapper[4923]: I0321 04:17:58.399864 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:17:58 crc kubenswrapper[4923]: I0321 04:17:58.399926 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:17:58 crc kubenswrapper[4923]: I0321 04:17:58.399950 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:17:58 crc 
kubenswrapper[4923]: I0321 04:17:58.399993 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:17:58 crc kubenswrapper[4923]: E0321 04:17:58.403810 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:58Z is after 2026-02-23T05:33:13Z" node="crc" Mar 21 04:17:59 crc kubenswrapper[4923]: I0321 04:17:59.257216 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:17:59Z is after 2026-02-23T05:33:13Z Mar 21 04:17:59 crc kubenswrapper[4923]: I0321 04:17:59.417096 4923 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:17:59 crc kubenswrapper[4923]: I0321 04:17:59.417215 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:18:00 crc kubenswrapper[4923]: I0321 04:18:00.254374 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:00Z is after 2026-02-23T05:33:13Z Mar 21 04:18:01 crc kubenswrapper[4923]: E0321 04:18:01.003106 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:01Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:01 crc kubenswrapper[4923]: I0321 04:18:01.257291 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:01Z is after 2026-02-23T05:33:13Z Mar 21 04:18:02 crc kubenswrapper[4923]: I0321 04:18:02.256634 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:02Z is after 2026-02-23T05:33:13Z Mar 21 04:18:03 crc kubenswrapper[4923]: I0321 04:18:03.255802 4923 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:03Z is after 2026-02-23T05:33:13Z Mar 21 04:18:04 crc kubenswrapper[4923]: I0321 04:18:04.255446 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:04Z is after 2026-02-23T05:33:13Z Mar 21 04:18:04 crc kubenswrapper[4923]: W0321 04:18:04.539244 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:04Z is after 2026-02-23T05:33:13Z Mar 21 04:18:04 crc kubenswrapper[4923]: E0321 04:18:04.539403 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:18:05 crc kubenswrapper[4923]: I0321 04:18:05.255073 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:05Z is after 2026-02-23T05:33:13Z 
Mar 21 04:18:05 crc kubenswrapper[4923]: E0321 04:18:05.400940 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:05Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 21 04:18:05 crc kubenswrapper[4923]: I0321 04:18:05.404124 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:05 crc kubenswrapper[4923]: I0321 04:18:05.405487 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:05 crc kubenswrapper[4923]: I0321 04:18:05.405552 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:05 crc kubenswrapper[4923]: I0321 04:18:05.405567 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:05 crc kubenswrapper[4923]: I0321 04:18:05.405604 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:18:05 crc kubenswrapper[4923]: E0321 04:18:05.408918 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:05Z is after 2026-02-23T05:33:13Z" node="crc" Mar 21 04:18:06 crc kubenswrapper[4923]: W0321 04:18:06.100863 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:06Z is after 
2026-02-23T05:33:13Z Mar 21 04:18:06 crc kubenswrapper[4923]: E0321 04:18:06.101027 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 21 04:18:06 crc kubenswrapper[4923]: I0321 04:18:06.254481 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:06Z is after 2026-02-23T05:33:13Z Mar 21 04:18:06 crc kubenswrapper[4923]: E0321 04:18:06.450716 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:18:07 crc kubenswrapper[4923]: I0321 04:18:07.257266 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:07Z is after 2026-02-23T05:33:13Z Mar 21 04:18:08 crc kubenswrapper[4923]: I0321 04:18:08.255904 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:08Z is after 2026-02-23T05:33:13Z Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.257742 4923 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:09Z is after 2026-02-23T05:33:13Z Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.417531 4923 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.417723 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.700397 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.700810 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.702674 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.702742 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:09 crc kubenswrapper[4923]: I0321 04:18:09.702766 4923 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:10 crc kubenswrapper[4923]: I0321 04:18:10.255477 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:10Z is after 2026-02-23T05:33:13Z Mar 21 04:18:11 crc kubenswrapper[4923]: E0321 04:18:11.007701 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:11Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.255790 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:11Z is after 2026-02-23T05:33:13Z Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.357981 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:11 crc 
kubenswrapper[4923]: I0321 04:18:11.359263 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.359386 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.359424 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.360469 4923 scope.go:117] "RemoveContainer" containerID="35be0f159b3e2b942a8f86191a9011e4072511e14ca6ae37ad5cb9e68768ca09" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.617227 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.620769 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec"} Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.620972 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.622066 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.622110 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:11 crc kubenswrapper[4923]: I0321 04:18:11.622127 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.257201 4923 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:12Z is after 2026-02-23T05:33:13Z Mar 21 04:18:12 crc kubenswrapper[4923]: E0321 04:18:12.405037 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:12Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.409912 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.411713 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.411750 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.411762 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.411791 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:18:12 crc kubenswrapper[4923]: E0321 04:18:12.414536 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:12Z is after 2026-02-23T05:33:13Z" node="crc" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.627543 4923 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.628789 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.631242 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec" exitCode=255 Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.631294 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec"} Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.631357 4923 scope.go:117] "RemoveContainer" containerID="35be0f159b3e2b942a8f86191a9011e4072511e14ca6ae37ad5cb9e68768ca09" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.631508 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.632539 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.632587 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.632604 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:12 crc kubenswrapper[4923]: I0321 04:18:12.633186 4923 scope.go:117] "RemoveContainer" 
containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec" Mar 21 04:18:12 crc kubenswrapper[4923]: E0321 04:18:12.633459 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:18:13 crc kubenswrapper[4923]: I0321 04:18:13.254059 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:13Z is after 2026-02-23T05:33:13Z Mar 21 04:18:13 crc kubenswrapper[4923]: I0321 04:18:13.636040 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 21 04:18:14 crc kubenswrapper[4923]: I0321 04:18:14.258414 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:14Z is after 2026-02-23T05:33:13Z Mar 21 04:18:15 crc kubenswrapper[4923]: I0321 04:18:15.254866 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:15Z is after 2026-02-23T05:33:13Z Mar 21 04:18:16 crc 
kubenswrapper[4923]: I0321 04:18:16.141265 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.141486 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.142807 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.142863 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.142877 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.143534 4923 scope.go:117] "RemoveContainer" containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec" Mar 21 04:18:16 crc kubenswrapper[4923]: E0321 04:18:16.143737 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.261020 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:16 crc kubenswrapper[4923]: E0321 04:18:16.452372 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 
04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.794891 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.795056 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.796167 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.796243 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.796255 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:16 crc kubenswrapper[4923]: I0321 04:18:16.797135 4923 scope.go:117] "RemoveContainer" containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec" Mar 21 04:18:16 crc kubenswrapper[4923]: E0321 04:18:16.797380 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:18:17 crc kubenswrapper[4923]: I0321 04:18:17.257611 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:18 crc kubenswrapper[4923]: I0321 04:18:18.256381 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.257539 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:19 crc kubenswrapper[4923]: E0321 04:18:19.412643 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.415617 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.416569 4923 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.416686 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.416798 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 
04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.417071 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.417360 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.417431 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.417445 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.417474 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.418835 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.418873 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.418883 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.419529 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"3fa2d1a0bdf87709c1d8c1c27da67657d8c08e34e65376cfafe0bff901d246ea"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.419677 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://3fa2d1a0bdf87709c1d8c1c27da67657d8c08e34e65376cfafe0bff901d246ea" gracePeriod=30 Mar 21 04:18:19 crc kubenswrapper[4923]: E0321 04:18:19.424940 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.658615 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.659811 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.660441 4923 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3fa2d1a0bdf87709c1d8c1c27da67657d8c08e34e65376cfafe0bff901d246ea" exitCode=255 Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.660499 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3fa2d1a0bdf87709c1d8c1c27da67657d8c08e34e65376cfafe0bff901d246ea"} Mar 21 04:18:19 crc kubenswrapper[4923]: I0321 04:18:19.660558 4923 scope.go:117] "RemoveContainer" containerID="929c4a6fd1d34acd75de2e0ae728dd6a6628aefce5eb1e246e2324b562b6ab5c" Mar 21 04:18:20 crc kubenswrapper[4923]: I0321 04:18:20.259361 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 21 04:18:20 crc kubenswrapper[4923]: I0321 04:18:20.666111 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 21 04:18:20 crc kubenswrapper[4923]: I0321 04:18:20.667760 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"450a94ec5b2076c8e45af1d0034f5694e6e828fbd2870a28120a6cec6c71a337"} Mar 21 04:18:20 crc kubenswrapper[4923]: I0321 04:18:20.667907 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:20 crc kubenswrapper[4923]: I0321 04:18:20.669008 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:20 crc kubenswrapper[4923]: I0321 04:18:20.669042 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:20 crc kubenswrapper[4923]: I0321 04:18:20.669052 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.013473 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034cd9fc601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC m=+1.404197806,LastTimestamp:2026-03-21 04:17:16.251186689 +0000 UTC 
m=+1.404197806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.017888 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.022032 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 
04:18:21.025766 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1ff0664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,LastTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.029261 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d91e70f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.444037368 +0000 UTC m=+1.597048465,LastTimestamp:2026-03-21 04:17:16.444037368 +0000 UTC m=+1.597048465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.033507 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fe7ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.469675397 +0000 UTC m=+1.622686484,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.037581 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fee08a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.469692287 +0000 UTC m=+1.622703374,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.041345 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1ff0664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1ff0664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,LastTimestamp:2026-03-21 04:17:16.469701307 +0000 UTC m=+1.622712394,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.047310 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fe7ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.470679315 +0000 UTC m=+1.623690412,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.051525 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fee08a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.470711016 +0000 UTC m=+1.623722113,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.055926 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1ff0664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1ff0664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,LastTimestamp:2026-03-21 04:17:16.470725506 +0000 UTC m=+1.623736603,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.059849 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fe7ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC 
m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.471711524 +0000 UTC m=+1.624722621,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.063565 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fee08a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.471736125 +0000 UTC m=+1.624747222,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.069033 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1ff0664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1ff0664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,LastTimestamp:2026-03-21 04:17:16.471751826 +0000 UTC m=+1.624762923,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.073650 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fe7ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.472041574 +0000 UTC m=+1.625052661,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.077697 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fee08a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.472051664 +0000 UTC m=+1.625062741,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.080830 4923 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1ff0664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1ff0664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,LastTimestamp:2026-03-21 04:17:16.472062354 +0000 UTC m=+1.625073441,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.084711 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fe7ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.472408934 +0000 UTC m=+1.625420061,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.089229 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fee08a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.472436035 +0000 UTC m=+1.625447162,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.092900 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1ff0664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1ff0664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,LastTimestamp:2026-03-21 04:17:16.472456176 +0000 UTC m=+1.625467303,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.096431 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fe7ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.473214217 +0000 UTC m=+1.626225344,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.101031 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fee08a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.473235458 +0000 UTC m=+1.626246585,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.104956 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1ff0664\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1ff0664 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324537956 +0000 UTC m=+1.477549033,LastTimestamp:2026-03-21 04:17:16.473252818 +0000 UTC m=+1.626263945,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.108691 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fe7ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fe7ac7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324502215 +0000 UTC m=+1.477513292,LastTimestamp:2026-03-21 04:17:16.473362811 +0000 UTC m=+1.626373898,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.113024 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ec034d1fee08a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ec034d1fee08a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.324528266 +0000 UTC 
m=+1.477539353,LastTimestamp:2026-03-21 04:17:16.473383862 +0000 UTC m=+1.626394949,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.117820 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ec034f3ae626c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.889678444 +0000 UTC m=+2.042689551,LastTimestamp:2026-03-21 04:17:16.889678444 +0000 UTC m=+2.042689551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.125052 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec034f3b12871 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.889860209 +0000 UTC m=+2.042871336,LastTimestamp:2026-03-21 04:17:16.889860209 +0000 UTC m=+2.042871336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.129164 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec034f407a26a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.89552753 +0000 UTC m=+2.048538617,LastTimestamp:2026-03-21 04:17:16.89552753 +0000 UTC m=+2.048538617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.134173 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec034f445a034 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.899590196 +0000 UTC m=+2.052601293,LastTimestamp:2026-03-21 04:17:16.899590196 +0000 UTC m=+2.052601293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.137829 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec034f4c21b84 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:16.907748228 +0000 UTC m=+2.060759315,LastTimestamp:2026-03-21 
04:17:16.907748228 +0000 UTC m=+2.060759315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.142651 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec0352bb660cc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.829726412 +0000 UTC m=+2.982737499,LastTimestamp:2026-03-21 04:17:17.829726412 +0000 UTC m=+2.982737499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.146464 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ec0352bbf4667 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.830309479 +0000 UTC m=+2.983320566,LastTimestamp:2026-03-21 04:17:17.830309479 +0000 UTC m=+2.983320566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.151245 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0352bc195dd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.830460893 +0000 UTC m=+2.983471980,LastTimestamp:2026-03-21 04:17:17.830460893 +0000 UTC m=+2.983471980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.155490 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec0352c38c4c5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.838271685 +0000 UTC m=+2.991282772,LastTimestamp:2026-03-21 04:17:17.838271685 +0000 UTC m=+2.991282772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.159453 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ec0352c916ad0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.84408136 +0000 UTC m=+2.997092437,LastTimestamp:2026-03-21 04:17:17.84408136 +0000 UTC m=+2.997092437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.163135 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0352c93e201 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.844242945 +0000 UTC m=+2.997254032,LastTimestamp:2026-03-21 04:17:17.844242945 +0000 UTC m=+2.997254032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.169301 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0352c94e3e7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.844308967 +0000 UTC m=+2.997320064,LastTimestamp:2026-03-21 04:17:17.844308967 +0000 UTC m=+2.997320064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.172860 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec0352c9507d7 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.844318167 +0000 UTC m=+2.997329264,LastTimestamp:2026-03-21 04:17:17.844318167 +0000 UTC m=+2.997329264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.176291 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec0352cb13f22 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.84616733 +0000 UTC m=+2.999178407,LastTimestamp:2026-03-21 04:17:17.84616733 +0000 UTC m=+2.999178407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.179912 4923 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec0352d19a9ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.853010414 +0000 UTC m=+3.006021501,LastTimestamp:2026-03-21 04:17:17.853010414 +0000 UTC m=+3.006021501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.184964 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0352df25856 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.867210838 +0000 UTC m=+3.020221935,LastTimestamp:2026-03-21 04:17:17.867210838 +0000 UTC m=+3.020221935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.188727 4923 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec03540392106 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.173839622 +0000 UTC m=+3.326850709,LastTimestamp:2026-03-21 04:17:18.173839622 +0000 UTC m=+3.326850709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.192097 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec0354148f93c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.191655228 +0000 UTC m=+3.344666315,LastTimestamp:2026-03-21 04:17:18.191655228 +0000 UTC m=+3.344666315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.198479 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec035415c8003 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.192934915 +0000 UTC m=+3.345946052,LastTimestamp:2026-03-21 04:17:18.192934915 +0000 UTC m=+3.345946052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.202146 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0354cb4a51a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.383260954 +0000 UTC m=+3.536272051,LastTimestamp:2026-03-21 04:17:18.383260954 +0000 UTC m=+3.536272051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.205724 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec0354cd6d5d0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.385501648 +0000 UTC m=+3.538512745,LastTimestamp:2026-03-21 04:17:18.385501648 +0000 UTC m=+3.538512745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.209772 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ec0354cec5bee openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.386912238 +0000 UTC m=+3.539923335,LastTimestamp:2026-03-21 04:17:18.386912238 +0000 UTC m=+3.539923335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.213415 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0354d1ee0a4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.390223012 +0000 UTC m=+3.543234109,LastTimestamp:2026-03-21 04:17:18.390223012 +0000 UTC m=+3.543234109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 
04:18:21.216539 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec035558e760c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.531753484 +0000 UTC m=+3.684764581,LastTimestamp:2026-03-21 04:17:18.531753484 +0000 UTC m=+3.684764581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.220605 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec0355650ba5c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.544484956 +0000 UTC m=+3.697496053,LastTimestamp:2026-03-21 04:17:18.544484956 +0000 UTC 
m=+3.697496053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.223812 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec035566c6876 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.546298998 +0000 UTC m=+3.699310095,LastTimestamp:2026-03-21 04:17:18.546298998 +0000 UTC m=+3.699310095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.226931 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0355b481026 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.627803174 +0000 UTC m=+3.780814271,LastTimestamp:2026-03-21 04:17:18.627803174 +0000 UTC m=+3.780814271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.230526 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ec0355c0e6e7e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.640803454 +0000 UTC m=+3.793814551,LastTimestamp:2026-03-21 04:17:18.640803454 +0000 UTC m=+3.793814551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.233657 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec0355c3eb7dc openshift-etcd 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.643967964 +0000 UTC m=+3.796979061,LastTimestamp:2026-03-21 04:17:18.643967964 +0000 UTC m=+3.796979061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.239028 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0355dd5a9e6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.670637542 +0000 UTC m=+3.823648639,LastTimestamp:2026-03-21 04:17:18.670637542 +0000 UTC m=+3.823648639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.243984 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0355e51fb41 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.678784833 +0000 UTC m=+3.831795930,LastTimestamp:2026-03-21 04:17:18.678784833 +0000 UTC m=+3.831795930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.248253 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0355e6162d8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.679794392 +0000 UTC m=+3.832805479,LastTimestamp:2026-03-21 04:17:18.679794392 +0000 UTC m=+3.832805479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.254917 4923 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ec0355edc5e65 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.687854181 +0000 UTC m=+3.840865268,LastTimestamp:2026-03-21 04:17:18.687854181 +0000 UTC m=+3.840865268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: I0321 04:18:21.255768 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.257854 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0356018639a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container 
kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.70856489 +0000 UTC m=+3.861575997,LastTimestamp:2026-03-21 04:17:18.70856489 +0000 UTC m=+3.861575997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.259597 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0356029f3d3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.709715923 +0000 UTC m=+3.862727010,LastTimestamp:2026-03-21 04:17:18.709715923 +0000 UTC m=+3.862727010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.265602 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec035623a40af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.744338607 +0000 UTC m=+3.897349694,LastTimestamp:2026-03-21 04:17:18.744338607 +0000 UTC m=+3.897349694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.269887 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec035641abbc0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.775827392 +0000 UTC m=+3.928838479,LastTimestamp:2026-03-21 04:17:18.775827392 +0000 UTC m=+3.928838479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.274256 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0356cb6a717 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.920263447 +0000 UTC m=+4.073274554,LastTimestamp:2026-03-21 04:17:18.920263447 +0000 UTC m=+4.073274554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.279598 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0356d0add52 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.925782354 +0000 UTC m=+4.078793441,LastTimestamp:2026-03-21 04:17:18.925782354 +0000 UTC m=+4.078793441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.284173 4923 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0356e05e6d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.942234321 +0000 UTC m=+4.095245408,LastTimestamp:2026-03-21 04:17:18.942234321 +0000 UTC m=+4.095245408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.290393 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0356e14c929 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.943209769 +0000 UTC m=+4.096220866,LastTimestamp:2026-03-21 04:17:18.943209769 +0000 UTC 
m=+4.096220866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.295233 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0356e78c169 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.949761385 +0000 UTC m=+4.102772482,LastTimestamp:2026-03-21 04:17:18.949761385 +0000 UTC m=+4.102772482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.299091 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0356e921d8c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.951423372 +0000 UTC m=+4.104434459,LastTimestamp:2026-03-21 04:17:18.951423372 +0000 UTC m=+4.104434459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.302450 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0357a03b43a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.14341689 +0000 UTC m=+4.296427997,LastTimestamp:2026-03-21 04:17:19.14341689 +0000 UTC m=+4.296427997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.306178 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0357a626be8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.149624296 +0000 UTC m=+4.302635393,LastTimestamp:2026-03-21 04:17:19.149624296 +0000 UTC m=+4.302635393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.309962 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ec0357b596a7e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.165811326 +0000 UTC m=+4.318822413,LastTimestamp:2026-03-21 04:17:19.165811326 +0000 UTC m=+4.318822413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.313901 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0357bc4b950 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.172843856 +0000 UTC m=+4.325854943,LastTimestamp:2026-03-21 04:17:19.172843856 +0000 UTC m=+4.325854943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.318357 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0357bd2e12a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.173771562 +0000 UTC m=+4.326782649,LastTimestamp:2026-03-21 04:17:19.173771562 +0000 UTC m=+4.326782649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.322690 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec0357c51c1d1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.182086609 +0000 UTC m=+4.335097696,LastTimestamp:2026-03-21 04:17:19.182086609 +0000 UTC m=+4.335097696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.326734 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035898d9299 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.404110489 +0000 UTC m=+4.557121586,LastTimestamp:2026-03-21 
04:17:19.404110489 +0000 UTC m=+4.557121586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.330858 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec035899d8979 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.405156729 +0000 UTC m=+4.558167816,LastTimestamp:2026-03-21 04:17:19.405156729 +0000 UTC m=+4.558167816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.335432 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0358b09ccb7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container 
kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.429029047 +0000 UTC m=+4.582040134,LastTimestamp:2026-03-21 04:17:19.429029047 +0000 UTC m=+4.582040134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.340294 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0358b33525b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.431750235 +0000 UTC m=+4.584761342,LastTimestamp:2026-03-21 04:17:19.431750235 +0000 UTC m=+4.584761342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.346146 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec03596706572 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.620302194 +0000 UTC m=+4.773313291,LastTimestamp:2026-03-21 04:17:19.620302194 +0000 UTC m=+4.773313291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.350429 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec03596a0e8a0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.623481504 +0000 UTC m=+4.776492591,LastTimestamp:2026-03-21 04:17:19.623481504 +0000 UTC m=+4.776492591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.354678 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec03597892361 openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.638700897 +0000 UTC m=+4.791711994,LastTimestamp:2026-03-21 04:17:19.638700897 +0000 UTC m=+4.791711994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.361180 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec03597e1ba3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.644506682 +0000 UTC m=+4.797517769,LastTimestamp:2026-03-21 04:17:19.644506682 +0000 UTC m=+4.797517769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.365717 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035c5e3230d 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:20.416350989 +0000 UTC m=+5.569362086,LastTimestamp:2026-03-21 04:17:20.416350989 +0000 UTC m=+5.569362086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.368086 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035d7e523be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:20.718472126 +0000 UTC m=+5.871483213,LastTimestamp:2026-03-21 04:17:20.718472126 +0000 UTC m=+5.871483213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.371913 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ec035d892fb5a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:20.72986505 +0000 UTC m=+5.882876137,LastTimestamp:2026-03-21 04:17:20.72986505 +0000 UTC m=+5.882876137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.377712 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035d8a3546e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:20.73093643 +0000 UTC m=+5.883947517,LastTimestamp:2026-03-21 04:17:20.73093643 +0000 UTC m=+5.883947517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.383416 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035e6a7a514 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:20.966100244 +0000 UTC m=+6.119111351,LastTimestamp:2026-03-21 04:17:20.966100244 +0000 UTC m=+6.119111351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.389874 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035e7851811 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:20.980613137 +0000 UTC m=+6.133624234,LastTimestamp:2026-03-21 04:17:20.980613137 +0000 UTC m=+6.133624234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.395933 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035e7941409 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:20.981595145 +0000 UTC m=+6.134606232,LastTimestamp:2026-03-21 04:17:20.981595145 +0000 UTC m=+6.134606232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.402417 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035f286d9b9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.165277625 +0000 UTC m=+6.318288712,LastTimestamp:2026-03-21 04:17:21.165277625 +0000 UTC m=+6.318288712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.408704 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189ec035f3d45496 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.187132566 +0000 UTC m=+6.340143653,LastTimestamp:2026-03-21 04:17:21.187132566 +0000 UTC m=+6.340143653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.414841 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec035f3e94265 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.188504165 +0000 UTC m=+6.341515242,LastTimestamp:2026-03-21 04:17:21.188504165 +0000 UTC m=+6.341515242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.419922 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec03601761b17 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.415838487 +0000 UTC m=+6.568849584,LastTimestamp:2026-03-21 04:17:21.415838487 +0000 UTC m=+6.568849584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.424864 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ec0358b33525b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0358b33525b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.431750235 +0000 UTC m=+4.584761342,LastTimestamp:2026-03-21 04:17:21.424556384 +0000 UTC m=+6.577567502,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc 
kubenswrapper[4923]: E0321 04:18:21.429640 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec03602641856 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.43143535 +0000 UTC m=+6.584446457,LastTimestamp:2026-03-21 04:17:21.43143535 +0000 UTC m=+6.584446457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.434119 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec03602798b6e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.43284107 +0000 UTC m=+6.585852157,LastTimestamp:2026-03-21 04:17:21.43284107 +0000 UTC m=+6.585852157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.437769 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ec03596a0e8a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec03596a0e8a0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.623481504 +0000 UTC m=+4.776492591,LastTimestamp:2026-03-21 04:17:21.602886063 +0000 UTC m=+6.755897150,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.442062 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec0360cdcacb5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.607109813 +0000 UTC m=+6.760120900,LastTimestamp:2026-03-21 04:17:21.607109813 +0000 UTC 
m=+6.760120900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.446610 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ec03597e1ba3a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec03597e1ba3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:19.644506682 +0000 UTC m=+4.797517769,LastTimestamp:2026-03-21 04:17:21.61508274 +0000 UTC m=+6.768093827,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.451281 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ec0360e5bae9a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:21.632210586 +0000 UTC 
m=+6.785221673,LastTimestamp:2026-03-21 04:17:21.632210586 +0000 UTC m=+6.785221673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.458128 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 21 04:18:21 crc kubenswrapper[4923]: &Event{ObjectMeta:{kube-controller-manager-crc.189ec037de5aef1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 21 04:18:21 crc kubenswrapper[4923]: body: Mar 21 04:18:21 crc kubenswrapper[4923]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:29.416789786 +0000 UTC m=+14.569800953,LastTimestamp:2026-03-21 04:17:29.416789786 +0000 UTC m=+14.569800953,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 21 04:18:21 crc kubenswrapper[4923]: > Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.462707 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec037de5e3356 openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:29.417003862 +0000 UTC m=+14.570014989,LastTimestamp:2026-03-21 04:17:29.417003862 +0000 UTC m=+14.570014989,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.466371 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 21 04:18:21 crc kubenswrapper[4923]: &Event{ObjectMeta:{kube-apiserver-crc.189ec0383c0822e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 21 04:18:21 crc kubenswrapper[4923]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Mar 21 04:18:21 crc kubenswrapper[4923]: Mar 21 04:18:21 
crc kubenswrapper[4923]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:30.98842186 +0000 UTC m=+16.141432947,LastTimestamp:2026-03-21 04:17:30.98842186 +0000 UTC m=+16.141432947,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 21 04:18:21 crc kubenswrapper[4923]: > Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.470799 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0383c090237 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:30.988479031 +0000 UTC m=+16.141490118,LastTimestamp:2026-03-21 04:17:30.988479031 +0000 UTC m=+16.141490118,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.479282 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ec0383c0822e4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 21 04:18:21 crc kubenswrapper[4923]: &Event{ObjectMeta:{kube-apiserver-crc.189ec0383c0822e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 21 04:18:21 crc kubenswrapper[4923]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Mar 21 04:18:21 crc kubenswrapper[4923]: Mar 21 04:18:21 crc kubenswrapper[4923]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:30.98842186 +0000 UTC m=+16.141432947,LastTimestamp:2026-03-21 04:17:30.993406176 +0000 UTC m=+16.146417253,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 21 04:18:21 crc kubenswrapper[4923]: > Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.483155 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ec0383c090237\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ec0383c090237 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 
04:17:30.988479031 +0000 UTC m=+16.141490118,LastTimestamp:2026-03-21 04:17:30.993448917 +0000 UTC m=+16.146460004,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.489484 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 21 04:18:21 crc kubenswrapper[4923]: &Event{ObjectMeta:{kube-controller-manager-crc.189ec03a326e2fd8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 21 04:18:21 crc kubenswrapper[4923]: body: Mar 21 04:18:21 crc kubenswrapper[4923]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:39.41727228 +0000 UTC m=+24.570283377,LastTimestamp:2026-03-21 04:17:39.41727228 +0000 UTC m=+24.570283377,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 21 04:18:21 crc kubenswrapper[4923]: > Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.493736 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec03a326f7c6f 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:39.417357423 +0000 UTC m=+24.570368520,LastTimestamp:2026-03-21 04:17:39.417357423 +0000 UTC m=+24.570368520,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.500826 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec03a326e2fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 21 04:18:21 crc kubenswrapper[4923]: &Event{ObjectMeta:{kube-controller-manager-crc.189ec03a326e2fd8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 21 04:18:21 crc kubenswrapper[4923]: body: Mar 21 04:18:21 crc kubenswrapper[4923]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 
04:17:39.41727228 +0000 UTC m=+24.570283377,LastTimestamp:2026-03-21 04:17:49.416512978 +0000 UTC m=+34.569524085,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 21 04:18:21 crc kubenswrapper[4923]: > Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.506787 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec03a326f7c6f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec03a326f7c6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:39.417357423 +0000 UTC m=+24.570368520,LastTimestamp:2026-03-21 04:17:49.41658367 +0000 UTC m=+34.569594767,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.510788 4923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec03c86affe48 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:49.420805704 +0000 UTC m=+34.573816801,LastTimestamp:2026-03-21 04:17:49.420805704 +0000 UTC m=+34.573816801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.514741 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec0352cb13f22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec0352cb13f22 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:17.84616733 +0000 UTC m=+2.999178407,LastTimestamp:2026-03-21 04:17:49.549252694 +0000 UTC m=+34.702263821,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.518010 4923 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec03540392106\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec03540392106 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.173839622 +0000 UTC m=+3.326850709,LastTimestamp:2026-03-21 04:17:49.761551912 +0000 UTC m=+34.914563039,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.521997 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec0354148f93c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec0354148f93c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:18.191655228 +0000 UTC 
m=+3.344666315,LastTimestamp:2026-03-21 04:17:49.796221243 +0000 UTC m=+34.949232360,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.528417 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec03a326e2fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 21 04:18:21 crc kubenswrapper[4923]: &Event{ObjectMeta:{kube-controller-manager-crc.189ec03a326e2fd8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 21 04:18:21 crc kubenswrapper[4923]: body: Mar 21 04:18:21 crc kubenswrapper[4923]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:39.41727228 +0000 UTC m=+24.570283377,LastTimestamp:2026-03-21 04:17:59.417177307 +0000 UTC m=+44.570188404,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 21 04:18:21 crc kubenswrapper[4923]: > Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.532474 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec03a326f7c6f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189ec03a326f7c6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:39.417357423 +0000 UTC m=+24.570368520,LastTimestamp:2026-03-21 04:17:59.417245129 +0000 UTC m=+44.570256226,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:18:21 crc kubenswrapper[4923]: E0321 04:18:21.536419 4923 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ec03a326e2fd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 21 04:18:21 crc kubenswrapper[4923]: &Event{ObjectMeta:{kube-controller-manager-crc.189ec03a326e2fd8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 21 04:18:21 crc kubenswrapper[4923]: body: Mar 21 04:18:21 crc kubenswrapper[4923]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:17:39.41727228 +0000 UTC m=+24.570283377,LastTimestamp:2026-03-21 04:18:09.41767109 +0000 UTC m=+54.570682267,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 21 04:18:21 crc kubenswrapper[4923]: > Mar 21 04:18:21 crc kubenswrapper[4923]: I0321 04:18:21.670575 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:21 crc kubenswrapper[4923]: I0321 04:18:21.671850 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:21 crc kubenswrapper[4923]: I0321 04:18:21.671909 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:21 crc kubenswrapper[4923]: I0321 04:18:21.671932 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:22 crc kubenswrapper[4923]: I0321 04:18:22.255862 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:23 crc kubenswrapper[4923]: I0321 04:18:23.257069 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:24 crc kubenswrapper[4923]: I0321 04:18:24.257759 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:24 crc 
kubenswrapper[4923]: I0321 04:18:24.739403 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:18:24 crc kubenswrapper[4923]: I0321 04:18:24.739813 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:24 crc kubenswrapper[4923]: I0321 04:18:24.740885 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:24 crc kubenswrapper[4923]: I0321 04:18:24.740920 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:24 crc kubenswrapper[4923]: I0321 04:18:24.740934 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:25 crc kubenswrapper[4923]: I0321 04:18:25.255985 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.258791 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.416961 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.417134 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.418122 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.418155 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.418166 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:26 crc kubenswrapper[4923]: E0321 04:18:26.418388 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.424973 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.425058 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.426580 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.426639 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.426660 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.426692 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:18:26 crc kubenswrapper[4923]: E0321 04:18:26.431792 4923 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" 
Mar 21 04:18:26 crc kubenswrapper[4923]: E0321 04:18:26.453106 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.681500 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.682297 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.682349 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:26 crc kubenswrapper[4923]: I0321 04:18:26.682364 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:27 crc kubenswrapper[4923]: I0321 04:18:27.260936 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:28 crc kubenswrapper[4923]: I0321 04:18:28.255183 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:29 crc kubenswrapper[4923]: I0321 04:18:29.100299 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 21 04:18:29 crc kubenswrapper[4923]: I0321 04:18:29.112842 4923 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 21 04:18:29 crc kubenswrapper[4923]: I0321 04:18:29.255339 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:30 crc kubenswrapper[4923]: I0321 04:18:30.257180 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.255723 4923 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.358456 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.359624 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.359675 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.359687 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.360217 4923 scope.go:117] "RemoveContainer" containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec" Mar 21 04:18:31 crc kubenswrapper[4923]: E0321 04:18:31.360415 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:18:31 crc kubenswrapper[4923]: W0321 04:18:31.669692 4923 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 21 04:18:31 crc kubenswrapper[4923]: E0321 04:18:31.669758 4923 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.699853 4923 csr.go:261] certificate signing request csr-bgjz5 is approved, waiting to be issued Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.707526 4923 csr.go:257] certificate signing request csr-bgjz5 is issued Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.767520 4923 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 21 04:18:31 crc kubenswrapper[4923]: I0321 04:18:31.936846 4923 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 21 04:18:32 crc kubenswrapper[4923]: I0321 04:18:32.709212 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-24 02:41:23.822607308 +0000 UTC Mar 21 04:18:32 crc kubenswrapper[4923]: I0321 04:18:32.709260 4923 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 5950h22m51.113351684s for next certificate rotation Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.432529 4923 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.433886 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.433955 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.433978 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.434161 4923 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.445160 4923 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.445591 4923 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.445917 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.450676 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.450723 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.450737 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.450755 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.450769 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:33Z","lastTransitionTime":"2026-03-21T04:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.469257 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8d7db8-ca59-4aa1-a66b-77ca41867327\\\",\\\"systemUUID\\\":\\\"cc949092-1947-409d-b7e5-479c6a1f7b47\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.483456 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.483500 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.483511 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.483527 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.483540 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:33Z","lastTransitionTime":"2026-03-21T04:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.496202 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8d7db8-ca59-4aa1-a66b-77ca41867327\\\",\\\"systemUUID\\\":\\\"cc949092-1947-409d-b7e5-479c6a1f7b47\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.506583 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.506635 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.506650 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.506669 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.506683 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:33Z","lastTransitionTime":"2026-03-21T04:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.524385 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8d7db8-ca59-4aa1-a66b-77ca41867327\\\",\\\"systemUUID\\\":\\\"cc949092-1947-409d-b7e5-479c6a1f7b47\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.537649 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.537699 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.537715 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.537744 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:33 crc kubenswrapper[4923]: I0321 04:18:33.537758 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:33Z","lastTransitionTime":"2026-03-21T04:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.554801 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8d7db8-ca59-4aa1-a66b-77ca41867327\\\",\\\"systemUUID\\\":\\\"cc949092-1947-409d-b7e5-479c6a1f7b47\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.555019 4923 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.555064 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.655603 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.756241 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.857390 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:33 crc kubenswrapper[4923]: E0321 04:18:33.958460 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.059021 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.159821 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.260259 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.361192 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.461672 4923 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.562794 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.663876 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: I0321 04:18:34.712068 4923 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 21 04:18:34 crc kubenswrapper[4923]: I0321 04:18:34.747495 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:18:34 crc kubenswrapper[4923]: I0321 04:18:34.747739 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:34 crc kubenswrapper[4923]: I0321 04:18:34.749201 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:34 crc kubenswrapper[4923]: I0321 04:18:34.749249 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:34 crc kubenswrapper[4923]: I0321 04:18:34.749264 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.764879 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.866004 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:34 crc kubenswrapper[4923]: E0321 04:18:34.966693 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not 
found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.067749 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.168729 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.269186 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.370028 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.470998 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.571882 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.672813 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.773841 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.874417 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:35 crc kubenswrapper[4923]: E0321 04:18:35.975388 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.076194 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.176641 4923 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.277140 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.377944 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.454213 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.478149 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.578909 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.680024 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.780218 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.880823 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:36 crc kubenswrapper[4923]: E0321 04:18:36.981733 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.082157 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.183219 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 
04:18:37.283811 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.384682 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.485669 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.586618 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.687765 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.788262 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.889237 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:37 crc kubenswrapper[4923]: E0321 04:18:37.990247 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.091278 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.192408 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.293390 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.394354 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 
04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.495523 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.596689 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.697529 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.798499 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:38 crc kubenswrapper[4923]: E0321 04:18:38.899239 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.000448 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.101020 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.201671 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.302359 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.403075 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.503553 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.603731 4923 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.704285 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.805275 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:39 crc kubenswrapper[4923]: E0321 04:18:39.905405 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.006378 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.106789 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.207145 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: I0321 04:18:40.254272 4923 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.308359 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.409044 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.509379 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.610451 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.710610 4923 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.811380 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:40 crc kubenswrapper[4923]: E0321 04:18:40.912120 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.013192 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.113425 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.214454 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.315086 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.415891 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.517044 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.617567 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.717685 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc kubenswrapper[4923]: E0321 04:18:41.817830 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:41 crc 
kubenswrapper[4923]: E0321 04:18:41.919005 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.019735 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.120797 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.221746 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.322580 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.423377 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.523513 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.624676 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.725382 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.826535 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:42 crc kubenswrapper[4923]: E0321 04:18:42.927258 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.028369 4923 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.129385 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.229801 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.330487 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.358208 4923 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.359540 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.359626 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.359655 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.360826 4923 scope.go:117] "RemoveContainer" containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.361239 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.431000 4923 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.531869 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.632394 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.732751 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.833702 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.933801 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.936002 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.939792 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.939842 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.939855 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.939870 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.939882 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:43Z","lastTransitionTime":"2026-03-21T04:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.952736 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8d7db8-ca59-4aa1-a66b-77ca41867327\\\",\\\"systemUUID\\\":\\\"cc949092-1947-409d-b7e5-479c6a1f7b47\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.956377 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.956445 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.956466 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.956494 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.956516 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:43Z","lastTransitionTime":"2026-03-21T04:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.970116 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.970174 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.970187 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.970204 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.970218 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:43Z","lastTransitionTime":"2026-03-21T04:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8d7db8-ca59-4aa1-a66b-77ca41867327\\\",\\\"systemUUID\\\":\\\"cc949092-1947-409d-b7e5-479c6a1f7b47\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.981703 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.981733 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.981745 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.981761 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:43 crc kubenswrapper[4923]: I0321 04:18:43.981772 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:43Z","lastTransitionTime":"2026-03-21T04:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.989809 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cf8d7db8-ca59-4aa1-a66b-77ca41867327\\\",\\\"systemUUID\\\":\\\"cc949092-1947-409d-b7e5-479c6a1f7b47\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:43 crc kubenswrapper[4923]: E0321 04:18:43.989926 4923 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.034261 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.134418 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.235270 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.335438 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.436201 4923 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.537101 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.637887 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.738770 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.840018 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:44 crc kubenswrapper[4923]: E0321 04:18:44.940735 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.041425 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.142390 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.242673 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.343651 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.443785 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.544400 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.644778 4923 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.745905 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.846050 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:45 crc kubenswrapper[4923]: E0321 04:18:45.947186 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.048253 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.148807 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.249766 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.350789 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.451894 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.454466 4923 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.552877 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.653579 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.754339 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.855520 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:46 crc kubenswrapper[4923]: E0321 04:18:46.955800 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.056865 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.157228 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.257412 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.385440 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.485602 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.585878 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.686187 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.787223 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.887621 4923 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Mar 21 04:18:47 crc kubenswrapper[4923]: E0321 04:18:47.988216 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.088465 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.189468 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.290505 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.391402 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.491857 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.592889 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.693549 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.793838 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.894365 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:48 crc kubenswrapper[4923]: E0321 04:18:48.994890 4923 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.060048 4923 reflector.go:368] 
Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.097753 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.097808 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.097826 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.097850 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.097869 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.200694 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.200763 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.200787 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.200819 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.200872 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.253711 4923 apiserver.go:52] "Watching apiserver" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.259311 4923 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.259646 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.260240 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.260400 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.260659 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.260696 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.260792 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.261382 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.261425 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.261535 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.261610 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.262719 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.263184 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.264015 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.264041 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.264083 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.264731 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.264986 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.265255 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.265446 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.303730 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.304219 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.304260 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.304273 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.304290 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.304301 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.317610 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.328501 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.337841 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.346191 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.359113 4923 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.361338 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.373775 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398142 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398298 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398408 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398487 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398563 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398626 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398695 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398761 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398831 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398898 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.398974 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399045 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399088 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399116 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399199 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399216 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399236 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399304 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399354 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399369 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399442 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399550 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399569 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399560 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399383 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399860 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399906 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.399991 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400120 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400350 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400364 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400378 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400426 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400606 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400609 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400759 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400789 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400817 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400804 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400844 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400899 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400923 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400944 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400968 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.400990 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401013 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401008 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401035 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401043 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401057 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401128 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401246 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401294 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401359 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401401 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401437 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401473 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401513 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401546 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401581 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401618 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401658 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401691 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401725 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401763 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401798 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401832 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401956 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402002 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402040 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402077 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402119 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402156 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402199 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402233 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401398 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402271 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401464 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402307 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402370 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402406 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 
04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402440 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402428 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401486 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401519 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401875 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401902 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401697 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401920 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.401996 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402245 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402561 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402577 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.402797 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:18:49.902730977 +0000 UTC m=+95.055742064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403266 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403298 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403530 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403562 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403629 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403814 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.402478 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403832 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403926 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403955 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404003 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404030 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404058 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.403958 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404084 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404107 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404131 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404158 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404181 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404207 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404228 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404251 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404273 4923 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404296 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404320 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404353 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404360 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404453 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404485 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404511 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404507 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404537 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404562 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404584 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404608 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404634 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404660 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404683 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404707 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404733 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404759 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404783 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404806 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404830 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404852 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405506 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405547 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405571 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.406502 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.406683 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.406755 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.406993 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.407081 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404632 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404654 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404670 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404669 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404904 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404898 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.407232 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404920 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405117 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405297 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405324 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.407273 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405315 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405951 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.405995 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.406039 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.406262 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.404588 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.407115 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.407144 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.407299 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408092 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408395 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" 
(UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408504 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408581 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408795 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408871 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408939 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409012 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409077 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409144 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409207 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409271 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408257 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408305 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408462 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409409 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408471 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408870 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.408630 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409269 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409370 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409577 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409718 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409765 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409793 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409821 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409845 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409871 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409898 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409894 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410007 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409927 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410226 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410256 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410290 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410312 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410780 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410985 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411047 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.409922 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411152 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411155 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411186 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411211 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411234 4923 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411258 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411281 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411304 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411350 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.410874 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411382 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411476 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411472 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411559 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411584 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411607 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411626 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411630 4923 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411645 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411640 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411664 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411684 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411685 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411701 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411718 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411733 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411738 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411749 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411766 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411782 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411803 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411819 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411838 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411853 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411870 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411879 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411831 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412172 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412190 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.411889 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412261 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412279 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412312 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412483 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412499 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412536 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412519 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412575 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412593 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412758 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412778 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412803 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412830 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412853 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412873 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412893 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412918 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412944 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412963 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412979 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413000 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413029 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413054 4923 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413073 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413098 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413122 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413147 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413170 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413188 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413212 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413242 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413265 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413287 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413309 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413367 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413390 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413413 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413436 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413460 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 21 04:18:49 crc 
kubenswrapper[4923]: I0321 04:18:49.413483 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413510 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413533 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413582 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413615 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413641 
4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413665 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413690 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413723 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413755 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413779 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413803 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413832 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413858 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413883 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413909 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413934 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414015 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414031 4923 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414045 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414058 4923 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414070 4923 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414083 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414096 4923 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414110 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414134 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414147 4923 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414159 4923 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414171 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414183 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414198 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414212 4923 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414226 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414242 4923 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414255 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414267 4923 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414280 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414293 4923 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414305 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414323 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414349 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414363 4923 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") 
on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414377 4923 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414393 4923 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414406 4923 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414418 4923 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414430 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414443 4923 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414455 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414467 4923 
reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414480 4923 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414495 4923 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414507 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414519 4923 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414531 4923 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414543 4923 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414624 4923 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414639 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414651 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414663 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414677 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414719 4923 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414732 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414744 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414757 
4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414770 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414782 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414794 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414806 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414821 4923 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414834 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414847 4923 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414861 4923 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414874 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414886 4923 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414899 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414914 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414926 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414940 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414954 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414965 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414978 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414991 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415005 4923 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415023 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415037 4923 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415048 4923 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415061 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415074 4923 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415087 4923 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415100 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415112 4923 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415126 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415137 4923 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415151 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415164 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415176 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415188 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415201 4923 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415215 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on 
node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415228 4923 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415243 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412587 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.412790 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413583 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413705 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.413833 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414088 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414192 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414304 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414524 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414902 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.414997 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415047 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415033 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415203 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415598 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415715 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415257 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415920 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415934 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415963 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415983 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.415927 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.416144 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.416416 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.416599 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.416615 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.416633 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.416648 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.416962 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417011 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417078 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.417107 4923 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.417205 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:49.917174589 +0000 UTC m=+95.070185726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417280 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417409 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417472 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.417568 4923 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.417654 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:49.917627472 +0000 UTC m=+95.070638649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417702 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417752 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417965 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.417988 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.418196 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.418426 4923 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.418446 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.418516 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.419796 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.420035 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.420562 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.420632 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.420672 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.421466 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.432880 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.432923 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.432942 4923 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.433014 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:49.932988822 +0000 UTC m=+95.085999919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.433243 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.433579 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.435172 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.436976 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.437266 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.437356 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.437635 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.437651 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.438818 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.438958 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.439232 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.439273 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.439422 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.439716 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.439746 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.440957 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.440978 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.441130 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.441152 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.441354 4923 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.441412 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:49.941391843 +0000 UTC m=+95.094402940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.442992 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.443482 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.445124 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.445239 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.445527 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.446844 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.447268 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.449110 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.449561 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.449713 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.449851 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.449913 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.449905 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.449962 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.450012 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.450416 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451025 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451125 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451142 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451202 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451221 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451313 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451382 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451466 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451976 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.451592 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.453202 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.453283 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.454825 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.454984 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.454941 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.455274 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.455300 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.455761 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.455850 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.455899 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.456546 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.456745 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.456796 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.456824 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.456850 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.457103 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.457114 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.457310 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.457418 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.457557 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.457587 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.458176 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.458399 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.461660 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.473276 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.492275 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.503676 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.512863 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.512901 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.512913 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.512930 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.512942 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517174 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517230 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517288 4923 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517344 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517358 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517370 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517384 
4923 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517403 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517418 4923 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517434 4923 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517447 4923 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517460 4923 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517471 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517483 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517495 4923 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517507 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517519 4923 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517530 4923 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517523 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517542 4923 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517630 4923 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517653 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517674 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517693 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517711 4923 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517730 4923 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517747 4923 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517765 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517782 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517807 4923 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517825 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517843 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517861 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517880 4923 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517898 4923 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517915 4923 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517932 4923 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517951 4923 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.517969 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518055 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518079 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518098 4923 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc 
kubenswrapper[4923]: I0321 04:18:49.518115 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518132 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518150 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518166 4923 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518184 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518201 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518219 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518233 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518240 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518284 4923 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518298 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518312 4923 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518343 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518356 4923 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518368 4923 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518382 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518396 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518410 4923 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518422 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518434 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518447 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518458 4923 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518469 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518481 4923 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518493 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518505 4923 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518516 4923 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518527 4923 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518540 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 
crc kubenswrapper[4923]: I0321 04:18:49.518554 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518565 4923 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518577 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518588 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518602 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518615 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518626 4923 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518638 4923 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518651 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518662 4923 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518674 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518686 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518698 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518710 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518722 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518737 4923 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518750 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518763 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518774 4923 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518786 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518804 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518816 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 
crc kubenswrapper[4923]: I0321 04:18:49.518828 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518839 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518851 4923 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518865 4923 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518877 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518889 4923 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518900 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 
04:18:49.518913 4923 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518925 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518937 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518948 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518964 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518977 4923 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.518990 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.519002 4923 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.519014 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.580288 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.593259 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.598419 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.616068 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.616112 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.616127 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.616148 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.616161 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: W0321 04:18:49.617643 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-94eb2bd250bd4e3b883ca950405ee430b884aea29230dd75dbcd4904afdfae6a WatchSource:0}: Error finding container 94eb2bd250bd4e3b883ca950405ee430b884aea29230dd75dbcd4904afdfae6a: Status 404 returned error can't find the container with id 94eb2bd250bd4e3b883ca950405ee430b884aea29230dd75dbcd4904afdfae6a Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.719033 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.719079 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.719091 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.719109 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.719124 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.741103 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"91bef3a34796c62d6c0e097774eef34b507cf2d9eef5ad2d66be94ed37f05fe8"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.743130 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"94eb2bd250bd4e3b883ca950405ee430b884aea29230dd75dbcd4904afdfae6a"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.744520 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"57b1536bd7145f5a4053c6da15301bc256ca499dbeef0b7e99de21d17873bdb0"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.822988 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.823029 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.823058 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.823080 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.823099 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.923150 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.923270 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.923467 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.923528 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 04:18:50.923491157 +0000 UTC m=+96.076502304 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.923538 4923 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.923580 4923 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.923658 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:50.923634022 +0000 UTC m=+96.076645149 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: E0321 04:18:49.923691 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:50.923673863 +0000 UTC m=+96.076684980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.925487 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.925589 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.925809 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.925909 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:49 crc kubenswrapper[4923]: I0321 04:18:49.925937 4923 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:49Z","lastTransitionTime":"2026-03-21T04:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.024701 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.024814 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.024958 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.025017 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.025056 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 
04:18:50.025074 4923 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.025220 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:51.0251959 +0000 UTC m=+96.178206997 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.025024 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.025430 4923 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.025551 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2026-03-21 04:18:51.02551201 +0000 UTC m=+96.178523277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.028240 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.028274 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.028283 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.028298 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.028309 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.130462 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.130519 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.130528 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.130545 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.130555 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.233370 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.233418 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.233431 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.233455 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.233469 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.336047 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.336115 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.336139 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.336168 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.336191 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.358272 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.358302 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.358443 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.358552 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.358991 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.359072 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.363219 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.364098 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.365307 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.366005 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.371648 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.372989 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.374753 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.375927 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.377438 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.378096 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.378782 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.380286 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.380985 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.382160 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.382868 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.384036 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.384748 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.385452 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.386632 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.387604 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.388220 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.389629 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.390172 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.391539 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.392084 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.393444 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.394263 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.395467 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.396202 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.397347 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.397976 4923 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.398115 4923 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.400365 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.401515 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.402016 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.403947 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.405189 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.405957 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.407273 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.408129 4923 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.409224 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.410069 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.411347 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.412110 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.413211 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.414041 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.415282 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.416242 4923 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.417501 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.418251 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.419322 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.420034 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.420775 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.421871 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.422596 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.439479 4923 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.439534 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.439546 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.439563 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.439576 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.542280 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.542357 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.542370 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.542391 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.542406 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.645364 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.645430 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.645440 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.645458 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.645468 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.747854 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.747908 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.747921 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.747940 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.747953 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.750075 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bebc2e636580c9e6433a3a03a5b64eb62f0b1fc63d8b7396b914a7a871de2099"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.750368 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"40d0283c3be97523fb4f821002ee3e0bc0cc5a1c785bba85e3a223456f4af23f"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.751950 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e9b536882db680fed6f078fe9ca0f6af7043110aee5dcfe980c8796133516345"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.773378 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.791929 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.807994 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.822668 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.836054 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebc2e636580c9e6433a3a03a5b64eb62f0b1fc63d8b7396b914a7a871de2099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T04:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d0283c3be97523fb4f821002ee3e0bc0cc5a1c785bba85e3a223456f4af23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T04:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.846181 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcaad79-bba0-488f-9b38-8c16c8a236dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759950f1ffb44fd2cb2df367b5347dd03ea2b0aeeb150258689cc18ac37875a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T04:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e46424958e68afb6fc6e4784275047e6dcfce223e753bc4f31a4e6f131a393e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e46424958e68afb6fc6e4784275047e6dcfce223e753bc4f31a4e6f131a393e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T04:17:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T04:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T04:17:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.852196 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.852245 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.852262 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.852286 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.852302 4923 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.861486 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.875301 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9b536882db680fed6f078fe9ca0f6af7043110aee5dcfe980c8796133516345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T04:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.886721 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.902274 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.923447 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bebc2e636580c9e6433a3a03a5b64eb62f0b1fc63d8b7396b914a7a871de2099\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T04:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40d0283c3be97523fb4f821002ee3e0bc0cc5a1c785bba85e3a223456f4af23f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T04:18:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.933707 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.933898 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.933970 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.934077 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:18:52.934057543 +0000 UTC m=+98.087068630 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.934111 4923 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.934297 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:52.934267919 +0000 UTC m=+98.087279036 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.934176 4923 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: E0321 04:18:50.934459 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:52.934433464 +0000 UTC m=+98.087444651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.942263 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.955158 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.955377 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.955466 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.955547 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.955637 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:50Z","lastTransitionTime":"2026-03-21T04:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.958532 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-21T04:18:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:50 crc kubenswrapper[4923]: I0321 04:18:50.973918 4923 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcaad79-bba0-488f-9b38-8c16c8a236dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-21T04:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://759950f1ffb44fd2cb2df367b5347dd03ea2b0aeeb150258689cc18ac37875a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-21T04:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e46424958e68afb6fc6e4784275047e6dcfce223e753bc4f31a4e6f131a393e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e46424958e68afb6fc6e4784275047e6dcfce223e753bc4f31a4e6f131a393e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-21T04:17:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-21T04:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-21T04:17:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-21T04:18:50Z is after 2025-08-24T17:21:41Z" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.035123 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.035437 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035335 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035638 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035507 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035719 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035733 4923 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035792 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:53.035776056 +0000 UTC m=+98.188787143 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035702 4923 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:51 crc kubenswrapper[4923]: E0321 04:18:51.035955 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:53.035945571 +0000 UTC m=+98.188956658 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.057987 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.058249 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.058382 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.058483 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.058552 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.161134 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.161196 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.161212 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.161230 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.161243 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.263451 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.263541 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.263564 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.263593 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.263615 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.366685 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.366748 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.366766 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.366789 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.366807 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.468789 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.468825 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.468833 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.468846 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.468857 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.570511 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.570538 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.570548 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.570561 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.570570 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.673091 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.673124 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.673134 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.673149 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.673160 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.775349 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.775385 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.775396 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.775410 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.775420 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.877715 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.877756 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.877768 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.877787 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.877798 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.980542 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.980613 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.980639 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.980666 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:51 crc kubenswrapper[4923]: I0321 04:18:51.980686 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:51Z","lastTransitionTime":"2026-03-21T04:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.083061 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.083419 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.083562 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.083709 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.083848 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.186363 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.186657 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.186841 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.187030 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.187216 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.290470 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.290552 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.290577 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.290607 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.290626 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.357872 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.357960 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.358465 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.358713 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.359209 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.359483 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.392914 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.392973 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.392990 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.393012 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.393029 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.496122 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.496167 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.496187 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.496208 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.496227 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.598775 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.598823 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.598840 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.598859 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.598870 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.701092 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.701172 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.701193 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.701221 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.701240 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.759601 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bc6e1de3ab1b81e51e78ec698d1d3bccb6ed6dae0f2c74bdc951516e18d90adb"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.805133 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.805193 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.805211 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.805233 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.805253 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.833915 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.833893344 podStartE2EDuration="2.833893344s" podCreationTimestamp="2026-03-21 04:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:18:52.811906626 +0000 UTC m=+97.964917743" watchObservedRunningTime="2026-03-21 04:18:52.833893344 +0000 UTC m=+97.986904441" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.912733 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.912788 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.912801 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.912820 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.912831 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:52Z","lastTransitionTime":"2026-03-21T04:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.952871 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.952976 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:52 crc kubenswrapper[4923]: I0321 04:18:52.953035 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.953114 4923 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.953114 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:18:56.95307888 +0000 UTC m=+102.106089987 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.953169 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:56.953152952 +0000 UTC m=+102.106164049 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.953201 4923 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:52 crc kubenswrapper[4923]: E0321 04:18:52.953303 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:56.953281266 +0000 UTC m=+102.106292453 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.015678 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.015713 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.015724 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.015741 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.015752 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.053507 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.053567 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053690 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053723 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053736 4923 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053745 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 
04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053770 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053782 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:57.053766732 +0000 UTC m=+102.206777829 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053788 4923 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:53 crc kubenswrapper[4923]: E0321 04:18:53.053843 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 04:18:57.053825444 +0000 UTC m=+102.206836571 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.118334 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.118362 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.118387 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.118400 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.118409 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.221003 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.221068 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.221088 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.221115 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.221133 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.323618 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.323669 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.323688 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.323713 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.323732 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.427298 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.427389 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.427409 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.427432 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.427451 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.530918 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.530976 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.530993 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.531018 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.531036 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.634119 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.634164 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.634181 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.634206 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.634223 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.737306 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.737423 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.737453 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.737486 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.737514 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.840860 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.840908 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.840922 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.840941 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.840964 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.944265 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.944361 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.944381 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.944409 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:53 crc kubenswrapper[4923]: I0321 04:18:53.944426 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:53Z","lastTransitionTime":"2026-03-21T04:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.047196 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.047267 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.047279 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.047305 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.047338 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:54Z","lastTransitionTime":"2026-03-21T04:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.150230 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.150306 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.150357 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.150386 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.150404 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:54Z","lastTransitionTime":"2026-03-21T04:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.220875 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.220944 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.220956 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.220980 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.220993 4923 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-21T04:18:54Z","lastTransitionTime":"2026-03-21T04:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.322296 4923 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.333314 4923 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.358377 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.358452 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 04:18:54 crc kubenswrapper[4923]: I0321 04:18:54.358509 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 04:18:54 crc kubenswrapper[4923]: E0321 04:18:54.358651 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 04:18:54 crc kubenswrapper[4923]: E0321 04:18:54.358701 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 04:18:54 crc kubenswrapper[4923]: E0321 04:18:54.358818 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 04:18:56 crc kubenswrapper[4923]: I0321 04:18:56.357853 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.358703 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 04:18:56 crc kubenswrapper[4923]: I0321 04:18:56.358909 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 04:18:56 crc kubenswrapper[4923]: I0321 04:18:56.358978 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.359150 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.359262 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 04:18:56 crc kubenswrapper[4923]: I0321 04:18:56.988263 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 04:18:56 crc kubenswrapper[4923]: I0321 04:18:56.988409 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.988454 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:04.988423763 +0000 UTC m=+110.141434880 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 04:18:56 crc kubenswrapper[4923]: I0321 04:18:56.988547 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.988564 4923 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.988627 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:04.988609449 +0000 UTC m=+110.141620576 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.988637 4923 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 21 04:18:56 crc kubenswrapper[4923]: E0321 04:18:56.988687 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:04.988672951 +0000 UTC m=+110.141684078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.089847 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.089909 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090044 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090073 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090089 4923 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090147 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:05.090128916 +0000 UTC m=+110.243140013 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090044 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090180 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090194 4923 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 04:18:57 crc kubenswrapper[4923]: E0321 04:18:57.090233 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:05.090221649 +0000 UTC m=+110.243232746 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.370828 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.371301 4923 scope.go:117] "RemoveContainer" containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec"
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.788094 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.791449 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95"}
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.791855 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 21 04:18:57 crc kubenswrapper[4923]: I0321 04:18:57.816057 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=0.816043546 podStartE2EDuration="816.043546ms" podCreationTimestamp="2026-03-21 04:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:18:57.814728826 +0000 UTC m=+102.967739923" watchObservedRunningTime="2026-03-21 04:18:57.816043546 +0000 UTC m=+102.969054643"
Mar 21 04:18:58 crc kubenswrapper[4923]: I0321 04:18:58.357420 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 04:18:58 crc kubenswrapper[4923]: E0321 04:18:58.357919 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 04:18:58 crc kubenswrapper[4923]: I0321 04:18:58.357461 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 04:18:58 crc kubenswrapper[4923]: E0321 04:18:58.358217 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 04:18:58 crc kubenswrapper[4923]: I0321 04:18:58.357462 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 04:18:58 crc kubenswrapper[4923]: E0321 04:18:58.358548 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 04:19:00 crc kubenswrapper[4923]: I0321 04:19:00.357434 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 04:19:00 crc kubenswrapper[4923]: E0321 04:19:00.357618 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 04:19:00 crc kubenswrapper[4923]: I0321 04:19:00.357685 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 04:19:00 crc kubenswrapper[4923]: E0321 04:19:00.357866 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 04:19:00 crc kubenswrapper[4923]: I0321 04:19:00.357708 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 04:19:00 crc kubenswrapper[4923]: E0321 04:19:00.358020 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.358575 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.358688 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.358771 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 04:19:02 crc kubenswrapper[4923]: E0321 04:19:02.358840 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 21 04:19:02 crc kubenswrapper[4923]: E0321 04:19:02.358921 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 21 04:19:02 crc kubenswrapper[4923]: E0321 04:19:02.359007 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.379871 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.880804 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hzdnw"]
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.881791 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hzdnw"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.884105 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.884549 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.885605 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.904213 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-cv5gr"]
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.904868 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8x6p9"]
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.905737 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.905995 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bxklc"]
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.906376 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bxklc"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.907025 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8x6p9"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.909935 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.910031 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.910268 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.910896 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.910592 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.910610 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.910675 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.910756 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.911372 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.911672 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.911865 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.915385 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.930665 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-df6ks"]
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.934884 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.939301 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.939700 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.939718 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.939893 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.943040 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.943221 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.944130 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 21 04:19:02 crc kubenswrapper[4923]: I0321 04:19:02.950900 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.950844174 podStartE2EDuration="950.844174ms" podCreationTimestamp="2026-03-21 04:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:02.947365499 +0000 UTC m=+108.100376586" watchObservedRunningTime="2026-03-21 04:19:02.950844174 +0000 UTC m=+108.103855301"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.012219 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw"]
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.012820 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.015196 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.015671 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.015704 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.015975 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046090 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-bin\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046190 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046233 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-slash\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046270 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-multus-certs\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046302 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-etc-kubernetes\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046373 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046418 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-log-socket\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046451 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-hostroot\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046485 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2p6x\" (UniqueName: \"kubernetes.io/projected/825094bd-1f06-4f1d-9dce-9909700899ad-kube-api-access-q2p6x\") pod \"node-resolver-hzdnw\" (UID: \"825094bd-1f06-4f1d-9dce-9909700899ad\") " pod="openshift-dns/node-resolver-hzdnw"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046513 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-systemd\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046545 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-env-overrides\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046576 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-os-release\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046608 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-cni-multus\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046636 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-cnibin\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046664 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-cnibin\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046692 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3c415c9-5270-474d-9361-3df6701f2b3e-cni-binary-copy\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046720 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-daemon-config\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046748 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j4rg\" (UniqueName: \"kubernetes.io/projected/e7e27c82-1749-4ac1-96e5-602ffc171726-kube-api-access-2j4rg\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046781 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndfc\" (UniqueName: \"kubernetes.io/projected/34cdf206-b121-415c-ae40-21245192e724-kube-api-access-2ndfc\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046810 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrwh\" (UniqueName: \"kubernetes.io/projected/b3c415c9-5270-474d-9361-3df6701f2b3e-kube-api-access-ddrwh\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046839 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9cv\" (UniqueName: \"kubernetes.io/projected/95d61f2a-3f56-4d98-a1df-384973815163-kube-api-access-dk9cv\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046867 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-kubelet\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046896 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-ovn\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046923 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-ovn-kubernetes\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046952 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-system-cni-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.046981 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-netns\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047012 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-systemd-units\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047052 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-netd\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047095 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-config\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047135 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-os-release\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9"
Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047241 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-system-cni-dir\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047280 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-k8s-cni-cncf-io\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047307 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7e27c82-1749-4ac1-96e5-602ffc171726-cni-binary-copy\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047360 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-socket-dir-parent\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047404 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-netns\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047429 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-cni-bin\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047451 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e7e27c82-1749-4ac1-96e5-602ffc171726-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047470 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-etc-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047490 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-cni-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047509 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/825094bd-1f06-4f1d-9dce-9909700899ad-hosts-file\") pod \"node-resolver-hzdnw\" (UID: \"825094bd-1f06-4f1d-9dce-9909700899ad\") " pod="openshift-dns/node-resolver-hzdnw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047538 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34cdf206-b121-415c-ae40-21245192e724-proxy-tls\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047557 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-var-lib-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047578 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-node-log\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047607 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047628 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34cdf206-b121-415c-ae40-21245192e724-mcd-auth-proxy-config\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " 
pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047648 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95d61f2a-3f56-4d98-a1df-384973815163-ovn-node-metrics-cert\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047669 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-script-lib\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047690 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-kubelet\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047711 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-conf-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.047733 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/34cdf206-b121-415c-ae40-21245192e724-rootfs\") pod \"machine-config-daemon-cv5gr\" (UID: 
\"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.104403 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4fx6w"] Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.104769 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.107764 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.108642 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.109023 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.110127 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149303 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-multus-certs\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149405 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-etc-kubernetes\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 
04:19:03.149454 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-slash\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149498 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149639 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-etc-kubernetes\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149706 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-log-socket\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149673 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-log-socket\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149610 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149613 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-slash\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149825 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2p6x\" (UniqueName: \"kubernetes.io/projected/825094bd-1f06-4f1d-9dce-9909700899ad-kube-api-access-q2p6x\") pod \"node-resolver-hzdnw\" (UID: \"825094bd-1f06-4f1d-9dce-9909700899ad\") " pod="openshift-dns/node-resolver-hzdnw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.149946 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-systemd\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150053 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-systemd\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150021 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-env-overrides\") pod \"ovnkube-node-df6ks\" 
(UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150138 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-os-release\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150171 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-cni-multus\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150203 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-hostroot\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150239 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-cnibin\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150274 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-cnibin\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150276 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-cni-multus\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150304 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3c415c9-5270-474d-9361-3df6701f2b3e-cni-binary-copy\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150310 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-os-release\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150364 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-daemon-config\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150363 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-hostroot\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150421 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j4rg\" (UniqueName: 
\"kubernetes.io/projected/e7e27c82-1749-4ac1-96e5-602ffc171726-kube-api-access-2j4rg\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150421 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-cnibin\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150462 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndfc\" (UniqueName: \"kubernetes.io/projected/34cdf206-b121-415c-ae40-21245192e724-kube-api-access-2ndfc\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150446 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-cnibin\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150530 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddrwh\" (UniqueName: \"kubernetes.io/projected/b3c415c9-5270-474d-9361-3df6701f2b3e-kube-api-access-ddrwh\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150570 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk9cv\" (UniqueName: 
\"kubernetes.io/projected/95d61f2a-3f56-4d98-a1df-384973815163-kube-api-access-dk9cv\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150612 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8b9b070-635c-4afe-94b0-e1f1a9e01888-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150647 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8b9b070-635c-4afe-94b0-e1f1a9e01888-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150680 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-kubelet\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150731 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-kubelet\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150741 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-ovn\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150857 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-ovn-kubernetes\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150868 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-ovn\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150886 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-ovn-kubernetes\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150925 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-system-cni-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.150964 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-netns\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151003 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-netd\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151029 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-netns\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151034 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-config\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151093 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-os-release\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151150 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-systemd-units\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151148 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-netd\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151140 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-system-cni-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151187 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-k8s-cni-cncf-io\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151212 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-k8s-cni-cncf-io\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151214 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-systemd-units\") pod \"ovnkube-node-df6ks\" (UID: 
\"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151199 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-os-release\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151240 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-system-cni-dir\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151279 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-system-cni-dir\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151379 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7e27c82-1749-4ac1-96e5-602ffc171726-cni-binary-copy\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151429 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/c8b9b070-635c-4afe-94b0-e1f1a9e01888-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151469 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-multus-certs\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151476 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-socket-dir-parent\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151539 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-socket-dir-parent\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151543 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-netns\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151579 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-run-netns\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151599 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-cni-bin\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151636 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-cni-bin\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151641 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e7e27c82-1749-4ac1-96e5-602ffc171726-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151692 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8b9b070-635c-4afe-94b0-e1f1a9e01888-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151736 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-cni-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151768 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/825094bd-1f06-4f1d-9dce-9909700899ad-hosts-file\") pod \"node-resolver-hzdnw\" (UID: \"825094bd-1f06-4f1d-9dce-9909700899ad\") " pod="openshift-dns/node-resolver-hzdnw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151804 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-etc-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151835 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34cdf206-b121-415c-ae40-21245192e724-proxy-tls\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151864 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-var-lib-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151897 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-node-log\") pod 
\"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.151970 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152006 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34cdf206-b121-415c-ae40-21245192e724-mcd-auth-proxy-config\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152042 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95d61f2a-3f56-4d98-a1df-384973815163-ovn-node-metrics-cert\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152042 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-daemon-config\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152073 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-script-lib\") pod 
\"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152048 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3c415c9-5270-474d-9361-3df6701f2b3e-cni-binary-copy\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152111 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b9b070-635c-4afe-94b0-e1f1a9e01888-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152148 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-conf-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152164 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-config\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152187 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/34cdf206-b121-415c-ae40-21245192e724-rootfs\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") 
" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152224 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-cni-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152261 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-kubelet\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152310 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-host-var-lib-kubelet\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152376 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-bin\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152434 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 
04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152566 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152645 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-bin\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152700 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-node-log\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152757 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-var-lib-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152755 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7e27c82-1749-4ac1-96e5-602ffc171726-cni-binary-copy\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 
04:19:03.152802 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e7e27c82-1749-4ac1-96e5-602ffc171726-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152825 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/825094bd-1f06-4f1d-9dce-9909700899ad-hosts-file\") pod \"node-resolver-hzdnw\" (UID: \"825094bd-1f06-4f1d-9dce-9909700899ad\") " pod="openshift-dns/node-resolver-hzdnw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152879 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3c415c9-5270-474d-9361-3df6701f2b3e-multus-conf-dir\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152889 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/34cdf206-b121-415c-ae40-21245192e724-rootfs\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.152941 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-etc-openvswitch\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.153088 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34cdf206-b121-415c-ae40-21245192e724-mcd-auth-proxy-config\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.153576 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-env-overrides\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.153738 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7e27c82-1749-4ac1-96e5-602ffc171726-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.153995 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-script-lib\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.160527 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95d61f2a-3f56-4d98-a1df-384973815163-ovn-node-metrics-cert\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.171375 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/34cdf206-b121-415c-ae40-21245192e724-proxy-tls\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.173059 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j4rg\" (UniqueName: \"kubernetes.io/projected/e7e27c82-1749-4ac1-96e5-602ffc171726-kube-api-access-2j4rg\") pod \"multus-additional-cni-plugins-8x6p9\" (UID: \"e7e27c82-1749-4ac1-96e5-602ffc171726\") " pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.174370 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddrwh\" (UniqueName: \"kubernetes.io/projected/b3c415c9-5270-474d-9361-3df6701f2b3e-kube-api-access-ddrwh\") pod \"multus-bxklc\" (UID: \"b3c415c9-5270-474d-9361-3df6701f2b3e\") " pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.174855 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2p6x\" (UniqueName: \"kubernetes.io/projected/825094bd-1f06-4f1d-9dce-9909700899ad-kube-api-access-q2p6x\") pod \"node-resolver-hzdnw\" (UID: \"825094bd-1f06-4f1d-9dce-9909700899ad\") " pod="openshift-dns/node-resolver-hzdnw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.180453 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk9cv\" (UniqueName: \"kubernetes.io/projected/95d61f2a-3f56-4d98-a1df-384973815163-kube-api-access-dk9cv\") pod \"ovnkube-node-df6ks\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.181255 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ndfc\" (UniqueName: 
\"kubernetes.io/projected/34cdf206-b121-415c-ae40-21245192e724-kube-api-access-2ndfc\") pod \"machine-config-daemon-cv5gr\" (UID: \"34cdf206-b121-415c-ae40-21245192e724\") " pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.211806 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hzdnw" Mar 21 04:19:03 crc kubenswrapper[4923]: W0321 04:19:03.228598 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod825094bd_1f06_4f1d_9dce_9909700899ad.slice/crio-b02895e42187e7a95d53b9109c596d5ec1e47d42a919c6851443482a2dba294c WatchSource:0}: Error finding container b02895e42187e7a95d53b9109c596d5ec1e47d42a919c6851443482a2dba294c: Status 404 returned error can't find the container with id b02895e42187e7a95d53b9109c596d5ec1e47d42a919c6851443482a2dba294c Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.234757 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.247222 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bxklc" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253029 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8b9b070-635c-4afe-94b0-e1f1a9e01888-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253074 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c8b9b070-635c-4afe-94b0-e1f1a9e01888-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253130 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b9b070-635c-4afe-94b0-e1f1a9e01888-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253188 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47b54445-1415-4a43-8b54-e36def4bedba-host\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253210 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/47b54445-1415-4a43-8b54-e36def4bedba-serviceca\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253233 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8b9b070-635c-4afe-94b0-e1f1a9e01888-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253256 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8b9b070-635c-4afe-94b0-e1f1a9e01888-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253278 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjhmg\" (UniqueName: \"kubernetes.io/projected/47b54445-1415-4a43-8b54-e36def4bedba-kube-api-access-vjhmg\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253395 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8b9b070-635c-4afe-94b0-e1f1a9e01888-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.253435 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c8b9b070-635c-4afe-94b0-e1f1a9e01888-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.254827 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8b9b070-635c-4afe-94b0-e1f1a9e01888-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.262251 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8b9b070-635c-4afe-94b0-e1f1a9e01888-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.264277 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.269025 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.285042 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8b9b070-635c-4afe-94b0-e1f1a9e01888-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xjlxw\" (UID: \"c8b9b070-635c-4afe-94b0-e1f1a9e01888\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: W0321 04:19:03.288531 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95d61f2a_3f56_4d98_a1df_384973815163.slice/crio-326b4a5a50855478d5e1c8c394e8a09c035423c177b683af529da1d6145ee7b5 WatchSource:0}: Error finding container 326b4a5a50855478d5e1c8c394e8a09c035423c177b683af529da1d6145ee7b5: Status 404 returned error can't find the container with id 326b4a5a50855478d5e1c8c394e8a09c035423c177b683af529da1d6145ee7b5 Mar 21 04:19:03 crc kubenswrapper[4923]: W0321 04:19:03.310025 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7e27c82_1749_4ac1_96e5_602ffc171726.slice/crio-012ca771873da480348276bd3ba184be2f2fb0049b3afa8d5db67dea0f131e5c WatchSource:0}: Error finding container 012ca771873da480348276bd3ba184be2f2fb0049b3afa8d5db67dea0f131e5c: Status 404 returned error can't find the container with id 012ca771873da480348276bd3ba184be2f2fb0049b3afa8d5db67dea0f131e5c Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.325540 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.336179 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn"] Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.336639 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.345004 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.345202 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.355117 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47b54445-1415-4a43-8b54-e36def4bedba-host\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.355164 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/47b54445-1415-4a43-8b54-e36def4bedba-serviceca\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.355185 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjhmg\" (UniqueName: \"kubernetes.io/projected/47b54445-1415-4a43-8b54-e36def4bedba-kube-api-access-vjhmg\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 
21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.355297 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47b54445-1415-4a43-8b54-e36def4bedba-host\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.356573 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/47b54445-1415-4a43-8b54-e36def4bedba-serviceca\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.357576 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rxwzv"] Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.358236 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:03 crc kubenswrapper[4923]: E0321 04:19:03.358332 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.377108 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjhmg\" (UniqueName: \"kubernetes.io/projected/47b54445-1415-4a43-8b54-e36def4bedba-kube-api-access-vjhmg\") pod \"node-ca-4fx6w\" (UID: \"47b54445-1415-4a43-8b54-e36def4bedba\") " pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.420568 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4fx6w" Mar 21 04:19:03 crc kubenswrapper[4923]: W0321 04:19:03.436937 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47b54445_1415_4a43_8b54_e36def4bedba.slice/crio-aa2c8ac88b0a7be2ac87c6750ca52984be5523b7c174e663b7d9a284c521ced4 WatchSource:0}: Error finding container aa2c8ac88b0a7be2ac87c6750ca52984be5523b7c174e663b7d9a284c521ced4: Status 404 returned error can't find the container with id aa2c8ac88b0a7be2ac87c6750ca52984be5523b7c174e663b7d9a284c521ced4 Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.455716 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxkg4\" (UniqueName: \"kubernetes.io/projected/427f97c8-8d28-4a22-8e41-6925432fe493-kube-api-access-xxkg4\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.455758 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/427f97c8-8d28-4a22-8e41-6925432fe493-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: 
\"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.455780 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhjq\" (UniqueName: \"kubernetes.io/projected/c807a2c9-347b-412f-ae48-0a1d03fefa10-kube-api-access-tfhjq\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.455882 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/427f97c8-8d28-4a22-8e41-6925432fe493-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.455958 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/427f97c8-8d28-4a22-8e41-6925432fe493-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.455983 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.556835 4923 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/427f97c8-8d28-4a22-8e41-6925432fe493-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.556901 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.556937 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/427f97c8-8d28-4a22-8e41-6925432fe493-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.556967 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxkg4\" (UniqueName: \"kubernetes.io/projected/427f97c8-8d28-4a22-8e41-6925432fe493-kube-api-access-xxkg4\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.556996 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhjq\" (UniqueName: \"kubernetes.io/projected/c807a2c9-347b-412f-ae48-0a1d03fefa10-kube-api-access-tfhjq\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:03 crc 
kubenswrapper[4923]: I0321 04:19:03.557051 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/427f97c8-8d28-4a22-8e41-6925432fe493-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: E0321 04:19:03.557097 4923 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:03 crc kubenswrapper[4923]: E0321 04:19:03.557199 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs podName:c807a2c9-347b-412f-ae48-0a1d03fefa10 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:04.057174094 +0000 UTC m=+109.210185191 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs") pod "network-metrics-daemon-rxwzv" (UID: "c807a2c9-347b-412f-ae48-0a1d03fefa10") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.558176 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/427f97c8-8d28-4a22-8e41-6925432fe493-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.559528 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/427f97c8-8d28-4a22-8e41-6925432fe493-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.562588 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/427f97c8-8d28-4a22-8e41-6925432fe493-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.577617 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhjq\" (UniqueName: \"kubernetes.io/projected/c807a2c9-347b-412f-ae48-0a1d03fefa10-kube-api-access-tfhjq\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:03 crc 
kubenswrapper[4923]: I0321 04:19:03.578298 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxkg4\" (UniqueName: \"kubernetes.io/projected/427f97c8-8d28-4a22-8e41-6925432fe493-kube-api-access-xxkg4\") pod \"ovnkube-control-plane-749d76644c-z8nrn\" (UID: \"427f97c8-8d28-4a22-8e41-6925432fe493\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.687416 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" Mar 21 04:19:03 crc kubenswrapper[4923]: W0321 04:19:03.703484 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod427f97c8_8d28_4a22_8e41_6925432fe493.slice/crio-f96cae3e9632418c8eb1a2eb164d518fcd3ae347d53f1faf51af2c4ba40d0c58 WatchSource:0}: Error finding container f96cae3e9632418c8eb1a2eb164d518fcd3ae347d53f1faf51af2c4ba40d0c58: Status 404 returned error can't find the container with id f96cae3e9632418c8eb1a2eb164d518fcd3ae347d53f1faf51af2c4ba40d0c58 Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.813261 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerStarted","Data":"f293a635d53b8e5a1d17a3c4bc4844d33fb32dd4cabb1648ff6b977b1ac9373e"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.813376 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerStarted","Data":"012ca771873da480348276bd3ba184be2f2fb0049b3afa8d5db67dea0f131e5c"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.817538 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hzdnw" 
event={"ID":"825094bd-1f06-4f1d-9dce-9909700899ad","Type":"ContainerStarted","Data":"a88561f113d6ba577fa34796492224f2f489397428a2d33efcb8e3cb925e5772"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.817571 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hzdnw" event={"ID":"825094bd-1f06-4f1d-9dce-9909700899ad","Type":"ContainerStarted","Data":"b02895e42187e7a95d53b9109c596d5ec1e47d42a919c6851443482a2dba294c"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.820953 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" event={"ID":"427f97c8-8d28-4a22-8e41-6925432fe493","Type":"ContainerStarted","Data":"f96cae3e9632418c8eb1a2eb164d518fcd3ae347d53f1faf51af2c4ba40d0c58"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.828346 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bxklc" event={"ID":"b3c415c9-5270-474d-9361-3df6701f2b3e","Type":"ContainerStarted","Data":"d584ef4cfd06291da7aa4f0627f6a16a9fbd3891ede970538bc80bbc32dc8311"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.828408 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bxklc" event={"ID":"b3c415c9-5270-474d-9361-3df6701f2b3e","Type":"ContainerStarted","Data":"6e742e284d19146c7c041e4eb1f14d30da2d45e5b77434c8a4c9fc0189173b58"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.833821 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hzdnw" podStartSLOduration=36.83377658 podStartE2EDuration="36.83377658s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:03.833139341 +0000 UTC m=+108.986150428" watchObservedRunningTime="2026-03-21 04:19:03.83377658 +0000 UTC m=+108.986787677" Mar 21 04:19:03 crc 
kubenswrapper[4923]: I0321 04:19:03.835512 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3" exitCode=0 Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.835638 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.835737 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"326b4a5a50855478d5e1c8c394e8a09c035423c177b683af529da1d6145ee7b5"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.839194 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4fx6w" event={"ID":"47b54445-1415-4a43-8b54-e36def4bedba","Type":"ContainerStarted","Data":"c9868d5bd3cd568cb629b0f341f7520425fc34f9aff95aa28c86a095ffa8cfd9"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.839237 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4fx6w" event={"ID":"47b54445-1415-4a43-8b54-e36def4bedba","Type":"ContainerStarted","Data":"aa2c8ac88b0a7be2ac87c6750ca52984be5523b7c174e663b7d9a284c521ced4"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.841317 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" event={"ID":"c8b9b070-635c-4afe-94b0-e1f1a9e01888","Type":"ContainerStarted","Data":"14fee4bd05d68fd1f5f7ae20cea59ec616cef1e04e164d539260880d9054e1be"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.841369 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" event={"ID":"c8b9b070-635c-4afe-94b0-e1f1a9e01888","Type":"ContainerStarted","Data":"83bfce26434f6caa57dc3d3187e1e4424ceae9558075381940c1f41bb49a1a15"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.844379 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"44aabe674a8d594fc4f9d2d86764faf06358d9116a58c5d5733fa4cd04602dd3"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.844431 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"76f6e7c342bff59611eaa01ba2ed57f1a12fe75dab0e15a48267ea824f439b6f"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.844446 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"d5fbd2c1260138b8c408fb06e6bf29ecf9aac7cd5d647f69501d8a9d3ce6fd54"} Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.887266 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bxklc" podStartSLOduration=36.88724777 podStartE2EDuration="36.88724777s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:03.853454709 +0000 UTC m=+109.006465796" watchObservedRunningTime="2026-03-21 04:19:03.88724777 +0000 UTC m=+109.040258857" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.902648 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podStartSLOduration=36.90263106 podStartE2EDuration="36.90263106s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:03.901676642 +0000 UTC m=+109.054687749" watchObservedRunningTime="2026-03-21 04:19:03.90263106 +0000 UTC m=+109.055642147" Mar 21 04:19:03 crc kubenswrapper[4923]: I0321 04:19:03.914669 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xjlxw" podStartSLOduration=36.91464606 podStartE2EDuration="36.91464606s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:03.914014021 +0000 UTC m=+109.067025108" watchObservedRunningTime="2026-03-21 04:19:03.91464606 +0000 UTC m=+109.067657157" Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.061705 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:04 crc kubenswrapper[4923]: E0321 04:19:04.061913 4923 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:04 crc kubenswrapper[4923]: E0321 04:19:04.062044 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs podName:c807a2c9-347b-412f-ae48-0a1d03fefa10 nodeName:}" failed. 
No retries permitted until 2026-03-21 04:19:05.062014919 +0000 UTC m=+110.215026036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs") pod "network-metrics-daemon-rxwzv" (UID: "c807a2c9-347b-412f-ae48-0a1d03fefa10") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.358416 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.358441 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.358559 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:04 crc kubenswrapper[4923]: E0321 04:19:04.358701 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:04 crc kubenswrapper[4923]: E0321 04:19:04.358564 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:04 crc kubenswrapper[4923]: E0321 04:19:04.358826 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.851234 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba"} Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.852943 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3"} Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.852984 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4"} Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.853004 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55"} Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.853106 4923 generic.go:334] "Generic (PLEG): container finished" 
podID="e7e27c82-1749-4ac1-96e5-602ffc171726" containerID="f293a635d53b8e5a1d17a3c4bc4844d33fb32dd4cabb1648ff6b977b1ac9373e" exitCode=0 Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.853440 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerDied","Data":"f293a635d53b8e5a1d17a3c4bc4844d33fb32dd4cabb1648ff6b977b1ac9373e"} Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.856732 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" event={"ID":"427f97c8-8d28-4a22-8e41-6925432fe493","Type":"ContainerStarted","Data":"4ad5f9779bfac0f6ae2ba6a3783e4c05f68e597f63edf092ee353afc0f77e3e7"} Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.856796 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" event={"ID":"427f97c8-8d28-4a22-8e41-6925432fe493","Type":"ContainerStarted","Data":"f801d2b1bb41d3f7cd72c5bb27502e8c8f7cd3e23147f1f5a80fca18dd9e40f1"} Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.917642 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4fx6w" podStartSLOduration=37.917612348 podStartE2EDuration="37.917612348s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:04.914646499 +0000 UTC m=+110.067657586" watchObservedRunningTime="2026-03-21 04:19:04.917612348 +0000 UTC m=+110.070623475" Mar 21 04:19:04 crc kubenswrapper[4923]: I0321 04:19:04.929735 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z8nrn" podStartSLOduration=37.92970936 podStartE2EDuration="37.92970936s" 
podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:04.927682719 +0000 UTC m=+110.080693816" watchObservedRunningTime="2026-03-21 04:19:04.92970936 +0000 UTC m=+110.082720457" Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.071936 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.072041 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.072072 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.072136 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 
04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.072204 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:21.072185193 +0000 UTC m=+126.225196280 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.072295 4923 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.072315 4923 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.072411 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:21.072388969 +0000 UTC m=+126.225400066 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.072307 4923 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.072484 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs podName:c807a2c9-347b-412f-ae48-0a1d03fefa10 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:07.07244604 +0000 UTC m=+112.225457217 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs") pod "network-metrics-daemon-rxwzv" (UID: "c807a2c9-347b-412f-ae48-0a1d03fefa10") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.072595 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:21.072569284 +0000 UTC m=+126.225580491 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.173492 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.173543 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.173686 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.173706 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.173720 4923 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.173773 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:21.173756792 +0000 UTC m=+126.326767889 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.174281 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.174343 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.174364 4923 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.174464 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-21 04:19:21.174436952 +0000 UTC m=+126.327448039 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.358383 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:05 crc kubenswrapper[4923]: E0321 04:19:05.358610 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.853610 4923 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.864497 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerStarted","Data":"cce5110acdd66f7045064c2f7b728fadb2524f0df1958266e4b1258ac9dc68b8"} Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.870847 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6"} Mar 21 04:19:05 crc kubenswrapper[4923]: I0321 04:19:05.870925 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e"} Mar 21 04:19:06 crc kubenswrapper[4923]: I0321 04:19:06.357496 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:06 crc kubenswrapper[4923]: I0321 04:19:06.357547 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:06 crc kubenswrapper[4923]: I0321 04:19:06.357502 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:06 crc kubenswrapper[4923]: E0321 04:19:06.359053 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:06 crc kubenswrapper[4923]: E0321 04:19:06.359687 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:06 crc kubenswrapper[4923]: E0321 04:19:06.359812 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:06 crc kubenswrapper[4923]: I0321 04:19:06.878343 4923 generic.go:334] "Generic (PLEG): container finished" podID="e7e27c82-1749-4ac1-96e5-602ffc171726" containerID="cce5110acdd66f7045064c2f7b728fadb2524f0df1958266e4b1258ac9dc68b8" exitCode=0 Mar 21 04:19:06 crc kubenswrapper[4923]: I0321 04:19:06.878433 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerDied","Data":"cce5110acdd66f7045064c2f7b728fadb2524f0df1958266e4b1258ac9dc68b8"} Mar 21 04:19:07 crc kubenswrapper[4923]: I0321 04:19:07.094090 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:07 crc kubenswrapper[4923]: E0321 04:19:07.094395 4923 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:07 crc kubenswrapper[4923]: E0321 04:19:07.094578 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs podName:c807a2c9-347b-412f-ae48-0a1d03fefa10 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:11.09456073 +0000 UTC m=+116.247571827 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs") pod "network-metrics-daemon-rxwzv" (UID: "c807a2c9-347b-412f-ae48-0a1d03fefa10") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:07 crc kubenswrapper[4923]: I0321 04:19:07.358496 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:07 crc kubenswrapper[4923]: E0321 04:19:07.358644 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:07 crc kubenswrapper[4923]: I0321 04:19:07.884075 4923 generic.go:334] "Generic (PLEG): container finished" podID="e7e27c82-1749-4ac1-96e5-602ffc171726" containerID="334c1ce50faecb2971d115a73ded036788cab9ba2fe0656998c30ca4c9afc2da" exitCode=0 Mar 21 04:19:07 crc kubenswrapper[4923]: I0321 04:19:07.884144 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerDied","Data":"334c1ce50faecb2971d115a73ded036788cab9ba2fe0656998c30ca4c9afc2da"} Mar 21 04:19:07 crc kubenswrapper[4923]: I0321 04:19:07.892521 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661"} Mar 21 04:19:08 crc kubenswrapper[4923]: I0321 04:19:08.358612 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:08 crc kubenswrapper[4923]: I0321 04:19:08.358645 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:08 crc kubenswrapper[4923]: E0321 04:19:08.358801 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:08 crc kubenswrapper[4923]: I0321 04:19:08.358869 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:08 crc kubenswrapper[4923]: E0321 04:19:08.359033 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:08 crc kubenswrapper[4923]: E0321 04:19:08.359150 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:08 crc kubenswrapper[4923]: I0321 04:19:08.898267 4923 generic.go:334] "Generic (PLEG): container finished" podID="e7e27c82-1749-4ac1-96e5-602ffc171726" containerID="088ae8866f2d31c66f3148d74b2cae0aa879bf7db262847a6d8b79fc88e241c1" exitCode=0 Mar 21 04:19:08 crc kubenswrapper[4923]: I0321 04:19:08.898385 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerDied","Data":"088ae8866f2d31c66f3148d74b2cae0aa879bf7db262847a6d8b79fc88e241c1"} Mar 21 04:19:09 crc kubenswrapper[4923]: I0321 04:19:09.357489 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:09 crc kubenswrapper[4923]: E0321 04:19:09.357681 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:09 crc kubenswrapper[4923]: I0321 04:19:09.907037 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerStarted","Data":"997bb4af681500e6846a0944e079a95833c94bbdff4398bed1cd7e39813671b4"} Mar 21 04:19:09 crc kubenswrapper[4923]: I0321 04:19:09.914266 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerStarted","Data":"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647"} Mar 21 04:19:10 crc kubenswrapper[4923]: I0321 04:19:10.357977 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:10 crc kubenswrapper[4923]: I0321 04:19:10.357977 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:10 crc kubenswrapper[4923]: I0321 04:19:10.358012 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:10 crc kubenswrapper[4923]: E0321 04:19:10.358174 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:10 crc kubenswrapper[4923]: E0321 04:19:10.358294 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:10 crc kubenswrapper[4923]: E0321 04:19:10.358400 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:10 crc kubenswrapper[4923]: I0321 04:19:10.919236 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:10 crc kubenswrapper[4923]: I0321 04:19:10.919846 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:10 crc kubenswrapper[4923]: I0321 04:19:10.963889 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podStartSLOduration=43.963867936 podStartE2EDuration="43.963867936s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:10.96164958 +0000 UTC m=+116.114660677" watchObservedRunningTime="2026-03-21 04:19:10.963867936 +0000 UTC 
m=+116.116879033" Mar 21 04:19:10 crc kubenswrapper[4923]: I0321 04:19:10.964567 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:11 crc kubenswrapper[4923]: I0321 04:19:11.136564 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:11 crc kubenswrapper[4923]: E0321 04:19:11.136776 4923 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:11 crc kubenswrapper[4923]: E0321 04:19:11.136920 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs podName:c807a2c9-347b-412f-ae48-0a1d03fefa10 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:19.136881353 +0000 UTC m=+124.289892540 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs") pod "network-metrics-daemon-rxwzv" (UID: "c807a2c9-347b-412f-ae48-0a1d03fefa10") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:11 crc kubenswrapper[4923]: I0321 04:19:11.357750 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:11 crc kubenswrapper[4923]: E0321 04:19:11.357941 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:11 crc kubenswrapper[4923]: I0321 04:19:11.923382 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:11 crc kubenswrapper[4923]: I0321 04:19:11.961703 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:12 crc kubenswrapper[4923]: I0321 04:19:12.357621 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:12 crc kubenswrapper[4923]: I0321 04:19:12.357738 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:12 crc kubenswrapper[4923]: E0321 04:19:12.357764 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:12 crc kubenswrapper[4923]: I0321 04:19:12.357827 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:12 crc kubenswrapper[4923]: E0321 04:19:12.357990 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:12 crc kubenswrapper[4923]: E0321 04:19:12.358132 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:12 crc kubenswrapper[4923]: I0321 04:19:12.929298 4923 generic.go:334] "Generic (PLEG): container finished" podID="e7e27c82-1749-4ac1-96e5-602ffc171726" containerID="997bb4af681500e6846a0944e079a95833c94bbdff4398bed1cd7e39813671b4" exitCode=0 Mar 21 04:19:12 crc kubenswrapper[4923]: I0321 04:19:12.929356 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerDied","Data":"997bb4af681500e6846a0944e079a95833c94bbdff4398bed1cd7e39813671b4"} Mar 21 04:19:13 crc kubenswrapper[4923]: I0321 04:19:13.357448 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:13 crc kubenswrapper[4923]: E0321 04:19:13.358048 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:13 crc kubenswrapper[4923]: I0321 04:19:13.936155 4923 generic.go:334] "Generic (PLEG): container finished" podID="e7e27c82-1749-4ac1-96e5-602ffc171726" containerID="657ca1c878978a8af573b1aa4cf8d6e2dee94575459b281c924f59cfec87ac48" exitCode=0 Mar 21 04:19:13 crc kubenswrapper[4923]: I0321 04:19:13.936220 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerDied","Data":"657ca1c878978a8af573b1aa4cf8d6e2dee94575459b281c924f59cfec87ac48"} Mar 21 04:19:14 crc kubenswrapper[4923]: I0321 04:19:14.020496 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rxwzv"] Mar 21 04:19:14 crc kubenswrapper[4923]: I0321 04:19:14.020616 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:14 crc kubenswrapper[4923]: E0321 04:19:14.020747 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:14 crc kubenswrapper[4923]: I0321 04:19:14.357478 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:14 crc kubenswrapper[4923]: E0321 04:19:14.357665 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:14 crc kubenswrapper[4923]: I0321 04:19:14.358183 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:14 crc kubenswrapper[4923]: E0321 04:19:14.358284 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:14 crc kubenswrapper[4923]: I0321 04:19:14.358421 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:14 crc kubenswrapper[4923]: E0321 04:19:14.358507 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:14 crc kubenswrapper[4923]: I0321 04:19:14.944707 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" event={"ID":"e7e27c82-1749-4ac1-96e5-602ffc171726","Type":"ContainerStarted","Data":"5bdf3d740046aa69f81f30e2829a123fdebf9c80ece5f6c20fd9dbe8d30ea4ed"} Mar 21 04:19:15 crc kubenswrapper[4923]: I0321 04:19:15.010350 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8x6p9" podStartSLOduration=48.010334133 podStartE2EDuration="48.010334133s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:15.003785697 +0000 UTC m=+120.156796784" watchObservedRunningTime="2026-03-21 04:19:15.010334133 +0000 UTC m=+120.163345220" Mar 21 04:19:16 crc kubenswrapper[4923]: I0321 04:19:16.149248 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:19:16 crc kubenswrapper[4923]: E0321 04:19:16.293951 4923 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 21 04:19:16 crc kubenswrapper[4923]: I0321 04:19:16.357695 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:16 crc kubenswrapper[4923]: I0321 04:19:16.357834 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:16 crc kubenswrapper[4923]: E0321 04:19:16.359361 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:16 crc kubenswrapper[4923]: I0321 04:19:16.359513 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:16 crc kubenswrapper[4923]: I0321 04:19:16.359685 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:16 crc kubenswrapper[4923]: E0321 04:19:16.359807 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:16 crc kubenswrapper[4923]: E0321 04:19:16.359688 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:16 crc kubenswrapper[4923]: E0321 04:19:16.360112 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:16 crc kubenswrapper[4923]: E0321 04:19:16.472556 4923 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 04:19:18 crc kubenswrapper[4923]: I0321 04:19:18.357672 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:18 crc kubenswrapper[4923]: I0321 04:19:18.357733 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:18 crc kubenswrapper[4923]: I0321 04:19:18.357794 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:18 crc kubenswrapper[4923]: E0321 04:19:18.357862 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:18 crc kubenswrapper[4923]: E0321 04:19:18.357956 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:18 crc kubenswrapper[4923]: I0321 04:19:18.358012 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:18 crc kubenswrapper[4923]: E0321 04:19:18.358128 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:18 crc kubenswrapper[4923]: E0321 04:19:18.358166 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:19 crc kubenswrapper[4923]: I0321 04:19:19.137845 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:19 crc kubenswrapper[4923]: E0321 04:19:19.138039 4923 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:19 crc kubenswrapper[4923]: E0321 04:19:19.138491 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs podName:c807a2c9-347b-412f-ae48-0a1d03fefa10 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:35.138461673 +0000 UTC m=+140.291472790 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs") pod "network-metrics-daemon-rxwzv" (UID: "c807a2c9-347b-412f-ae48-0a1d03fefa10") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 21 04:19:19 crc kubenswrapper[4923]: I0321 04:19:19.372614 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 21 04:19:20 crc kubenswrapper[4923]: I0321 04:19:20.358071 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:20 crc kubenswrapper[4923]: I0321 04:19:20.358392 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:20 crc kubenswrapper[4923]: I0321 04:19:20.358579 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:20 crc kubenswrapper[4923]: E0321 04:19:20.358570 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 21 04:19:20 crc kubenswrapper[4923]: E0321 04:19:20.358704 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rxwzv" podUID="c807a2c9-347b-412f-ae48-0a1d03fefa10" Mar 21 04:19:20 crc kubenswrapper[4923]: E0321 04:19:20.358842 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 21 04:19:20 crc kubenswrapper[4923]: I0321 04:19:20.359469 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:20 crc kubenswrapper[4923]: E0321 04:19:20.359767 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 21 04:19:21 crc kubenswrapper[4923]: I0321 04:19:21.161862 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:21 crc kubenswrapper[4923]: I0321 04:19:21.161997 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.162013 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:53.161985954 +0000 UTC m=+158.314997061 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:21 crc kubenswrapper[4923]: I0321 04:19:21.162044 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.162136 4923 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.162188 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:53.16217491 +0000 UTC m=+158.315186007 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.162211 4923 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.162259 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:53.162244702 +0000 UTC m=+158.315255799 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: I0321 04:19:21.263079 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:21 crc kubenswrapper[4923]: I0321 04:19:21.263148 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263382 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263395 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263471 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263494 4923 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263418 4923 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263579 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:53.263551593 +0000 UTC m=+158.416562720 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263592 4923 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:21 crc kubenswrapper[4923]: E0321 04:19:21.263674 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:53.263645556 +0000 UTC m=+158.416656673 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.357978 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.358039 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.358093 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.358239 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.364483 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.364912 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.364995 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.365162 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.365213 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 21 04:19:22 crc kubenswrapper[4923]: I0321 04:19:22.366513 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.746431 4923 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.794080 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72"] Mar 21 
04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.794441 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.797698 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.798176 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-p69c7"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.798628 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.801775 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.802165 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.802267 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.803647 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5bhvd"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.805423 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.805952 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.806399 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.806591 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.806800 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.806836 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.806839 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.807062 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.807449 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.812825 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qb6fp"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.813084 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.813647 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-94ccs"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.813862 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.814195 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vwt2s"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.814359 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.814917 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-22lp8"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.815360 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.815758 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jpr6"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.815974 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.816566 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.825809 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bmnqq"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.826356 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.829018 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.831379 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.831702 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.833046 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.833237 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.834219 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.835140 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.835285 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.836837 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.836965 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837406 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837419 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837548 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837549 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837666 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837787 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837828 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.837958 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.838481 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839036 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839057 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 21 04:19:24 
crc kubenswrapper[4923]: I0321 04:19:24.839087 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839147 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839171 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839292 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839365 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839426 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839435 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839451 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839694 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.839823 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.840141 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.840210 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.840377 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.840447 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.840604 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.840699 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.840712 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.849456 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.850500 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 21 04:19:24 crc 
kubenswrapper[4923]: I0321 04:19:24.852933 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.856543 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.856658 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.856773 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.861314 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.861709 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.861779 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.861845 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.862096 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.862170 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.862358 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" 
Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.863794 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.864356 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.866256 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.866507 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.866849 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.867455 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.868747 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.869828 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.869952 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.871143 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.871840 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.873259 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.873419 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.874360 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.874505 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.879456 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.880181 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.880584 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.880747 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.881365 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.881739 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.883011 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.884747 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.884950 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.885367 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.885604 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.896104 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tbxjk"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.896737 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.902881 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903104 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903253 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903263 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903349 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903420 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903538 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903590 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903538 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.903669 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 21 
04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.914627 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.914749 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.914958 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.915445 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.916622 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.918719 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kvhtt"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.929368 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.930913 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.931061 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.931157 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.932601 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.933856 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.933953 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.934646 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.939521 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4jzdb"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.939919 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.940140 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bxdt"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.941003 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.941013 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.941136 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.941475 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.941774 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.941937 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.944462 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.944893 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.945182 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.945235 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.945345 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.945724 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.945951 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.946438 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.946618 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.947003 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.947394 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zljcw"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.947706 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.948817 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.948993 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.949117 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.949166 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.952583 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.953229 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-blnd8"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.955275 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.956699 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.957850 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xw67l"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.958125 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.958402 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.958825 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.959088 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4s8j8"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.959350 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.959401 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.959465 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.959609 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.961368 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.961731 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.961926 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.962357 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.965418 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2hqpc"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.965897 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hqpc" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.973377 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.973426 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qb6fp"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.980627 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5bhvd"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.983036 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.983817 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-22lp8"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.983843 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-vwt2s"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.983952 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.985071 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-94ccs"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.991520 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kfnbv"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.996265 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jpr6"] Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.996482 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:24 crc kubenswrapper[4923]: I0321 04:19:24.998264 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.001489 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.007310 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4jzdb"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.007673 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=6.007662603 podStartE2EDuration="6.007662603s" podCreationTimestamp="2026-03-21 04:19:19 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:25.006684904 +0000 UTC m=+130.159696001" watchObservedRunningTime="2026-03-21 04:19:25.007662603 +0000 UTC m=+130.160673690" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.008156 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bxdt"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.010366 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.010397 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.010950 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xw67l"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017266 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017446 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017471 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017618 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017742 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017772 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1921652d-7f9e-4bef-927f-fd616e41f865-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017792 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017811 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017842 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-audit-dir\") pod \"apiserver-76f77b778f-5bhvd\" (UID: 
\"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017861 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017879 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9g7\" (UniqueName: \"kubernetes.io/projected/8372e439-9ab9-4ba3-80e8-c6d3959187f7-kube-api-access-ld9g7\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017954 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8n5r\" (UniqueName: \"kubernetes.io/projected/57f89f97-10d8-45d1-a69f-34d09c5224c9-kube-api-access-f8n5r\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.017996 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-node-pullsecrets\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018029 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-oauth-serving-cert\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018057 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018089 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-serving-cert\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018111 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018456 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018492 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018522 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018555 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-bound-sa-token\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018580 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2bv\" (UniqueName: \"kubernetes.io/projected/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-kube-api-access-bs2bv\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018606 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-tls\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018623 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77bz\" (UniqueName: \"kubernetes.io/projected/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-kube-api-access-t77bz\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018644 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8v49\" (UniqueName: \"kubernetes.io/projected/baaa32c9-702b-4a43-a7b7-7a98272f80f3-kube-api-access-p8v49\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018665 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-image-import-ca\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018681 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-policies\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018696 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8372e439-9ab9-4ba3-80e8-c6d3959187f7-machine-approver-tls\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018744 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018793 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5vj\" (UniqueName: \"kubernetes.io/projected/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-kube-api-access-xm5vj\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018799 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.018829 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-etcd-client\") pod \"apiserver-76f77b778f-5bhvd\" (UID: 
\"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019018 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019047 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-certificates\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019072 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1921652d-7f9e-4bef-927f-fd616e41f865-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.019084 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:25.519067824 +0000 UTC m=+130.672079021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019113 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-config\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019140 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019165 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-service-ca\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019187 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-serving-cert\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019208 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f89f97-10d8-45d1-a69f-34d09c5224c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019238 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019293 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-client-ca\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019355 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/baaa32c9-702b-4a43-a7b7-7a98272f80f3-images\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc 
kubenswrapper[4923]: I0321 04:19:25.019384 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-audit\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019405 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-dir\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019443 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-config\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019473 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-config\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019500 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmx2s\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-kube-api-access-lmx2s\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: 
\"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019522 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57f89f97-10d8-45d1-a69f-34d09c5224c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019543 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-serving-cert\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019584 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-audit-dir\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019607 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019629 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8372e439-9ab9-4ba3-80e8-c6d3959187f7-config\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019653 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-oauth-config\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019674 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8372e439-9ab9-4ba3-80e8-c6d3959187f7-auth-proxy-config\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019698 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019718 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019740 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swljp\" (UniqueName: \"kubernetes.io/projected/1921652d-7f9e-4bef-927f-fd616e41f865-kube-api-access-swljp\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019761 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbqrd\" (UniqueName: \"kubernetes.io/projected/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-kube-api-access-kbqrd\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019786 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019809 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwfl\" (UniqueName: \"kubernetes.io/projected/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-kube-api-access-fvwfl\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019825 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-serving-cert\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019840 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019858 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57f89f97-10d8-45d1-a69f-34d09c5224c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019875 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-encryption-config\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.021550 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.021872 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-bmnqq"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.023341 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.024035 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.025779 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9czdb"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.026434 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qgp9s"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.027208 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.019889 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-config\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029527 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-etcd-serving-ca\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029557 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-serving-cert\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029581 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-trusted-ca-bundle\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029600 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhst\" (UniqueName: \"kubernetes.io/projected/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-kube-api-access-fxhst\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029618 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-audit-policies\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029634 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-encryption-config\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029650 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029670 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029685 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029704 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqnqb\" (UniqueName: \"kubernetes.io/projected/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-kube-api-access-hqnqb\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029724 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-trusted-ca\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029738 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5944eec5-8a3f-4fde-86d1-792b48c0a2bd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wfhqh\" (UID: \"5944eec5-8a3f-4fde-86d1-792b48c0a2bd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029753 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaa32c9-702b-4a43-a7b7-7a98272f80f3-config\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029770 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/baaa32c9-702b-4a43-a7b7-7a98272f80f3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029785 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029802 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttsdv\" (UniqueName: \"kubernetes.io/projected/5944eec5-8a3f-4fde-86d1-792b48c0a2bd-kube-api-access-ttsdv\") pod \"cluster-samples-operator-665b6dd947-wfhqh\" (UID: \"5944eec5-8a3f-4fde-86d1-792b48c0a2bd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029832 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20a37f55-0799-4e21-88d8-f50c736f01ef-metrics-tls\") pod \"dns-operator-744455d44c-qb6fp\" (UID: \"20a37f55-0799-4e21-88d8-f50c736f01ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029847 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-etcd-client\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.029864 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4xpk\" (UniqueName: \"kubernetes.io/projected/20a37f55-0799-4e21-88d8-f50c736f01ef-kube-api-access-x4xpk\") pod \"dns-operator-744455d44c-qb6fp\" (UID: \"20a37f55-0799-4e21-88d8-f50c736f01ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.030107 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.030977 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.032383 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hqpc"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.033989 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.035081 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4s8j8"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.036192 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.038843 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.039793 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.043082 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zljcw"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.045085 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-blnd8"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.046154 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p69c7"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.047678 
4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.048649 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.049707 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kvhtt"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.050709 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9czdb"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.051640 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qgp9s"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.054035 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.054893 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.056081 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.057720 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l24r9"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.058374 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.059373 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-k47v5"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.060339 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k47v5"] Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.060441 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k47v5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.076508 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.097365 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.116686 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.130361 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.130590 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:25.63056057 +0000 UTC m=+130.783571667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.130898 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.131038 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-node-pullsecrets\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.131229 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-node-pullsecrets\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.131350 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-oauth-serving-cert\") pod \"console-f9d7485db-p69c7\" (UID: 
\"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.131448 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.131582 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc511693-2a38-4ca9-bf24-c7f8b7c47972-trusted-ca\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.131786 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-csi-data-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.131916 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132024 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8v49\" (UniqueName: 
\"kubernetes.io/projected/baaa32c9-702b-4a43-a7b7-7a98272f80f3-kube-api-access-p8v49\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132115 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-image-import-ca\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132207 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-policies\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132313 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5vj\" (UniqueName: \"kubernetes.io/projected/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-kube-api-access-xm5vj\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132417 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132513 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132585 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-etcd-client\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132672 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxsvz\" (UniqueName: \"kubernetes.io/projected/e2c285ca-61ae-40f2-b349-7028faa03a00-kube-api-access-zxsvz\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.132741 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:25.632728505 +0000 UTC m=+130.785739592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132770 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b6d403-50e8-4b92-aa1f-173ace636bca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132799 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1921652d-7f9e-4bef-927f-fd616e41f865-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132815 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132834 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sw66s\" (UniqueName: \"kubernetes.io/projected/dc511693-2a38-4ca9-bf24-c7f8b7c47972-kube-api-access-sw66s\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132851 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f89f97-10d8-45d1-a69f-34d09c5224c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132870 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmbql\" (UniqueName: \"kubernetes.io/projected/8ae3d7d8-8466-4519-91c0-48c7230d1388-kube-api-access-hmbql\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132889 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132906 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-client-ca\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc 
kubenswrapper[4923]: I0321 04:19:25.132922 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf54c874-99d3-4bff-bef0-992bcee74002-proxy-tls\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132940 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-config\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132939 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-image-import-ca\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132958 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-ca\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132976 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-client\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132994 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-config\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133010 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-secret-volume\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133027 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvks9\" (UniqueName: \"kubernetes.io/projected/a338651b-7efe-4160-84a4-30471aadc1b7-kube-api-access-tvks9\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133042 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea7292-1b26-4dc3-a5ba-d9b799c22264-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133061 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lmx2s\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-kube-api-access-lmx2s\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133086 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57f89f97-10d8-45d1-a69f-34d09c5224c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133110 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133127 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbqrd\" (UniqueName: \"kubernetes.io/projected/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-kube-api-access-kbqrd\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133144 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-serving-cert\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133160 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgwh\" (UniqueName: \"kubernetes.io/projected/1877b8e8-87db-477a-a619-be2fbdc97d89-kube-api-access-rfgwh\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133175 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-mountpoint-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133192 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-encryption-config\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133214 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-config\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133230 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-serving-cert\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 
04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133249 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-trusted-ca-bundle\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133264 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhst\" (UniqueName: \"kubernetes.io/projected/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-kube-api-access-fxhst\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133280 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc511693-2a38-4ca9-bf24-c7f8b7c47972-serving-cert\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133295 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-audit-policies\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133309 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-encryption-config\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133346 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133371 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/972f1013-d1cb-44f2-b79d-baacfef4e939-srv-cert\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133392 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133408 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf54c874-99d3-4bff-bef0-992bcee74002-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133422 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/a338651b-7efe-4160-84a4-30471aadc1b7-apiservice-cert\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133438 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1877b8e8-87db-477a-a619-be2fbdc97d89-proxy-tls\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133451 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-socket-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133465 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-registration-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133480 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-trusted-ca\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133497 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5944eec5-8a3f-4fde-86d1-792b48c0a2bd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wfhqh\" (UID: \"5944eec5-8a3f-4fde-86d1-792b48c0a2bd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133512 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133529 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9ql\" (UniqueName: \"kubernetes.io/projected/3f1486ed-5a65-43a1-9a45-c02318d4d831-kube-api-access-lp9ql\") pod \"downloads-7954f5f757-2hqpc\" (UID: \"3f1486ed-5a65-43a1-9a45-c02318d4d831\") " pod="openshift-console/downloads-7954f5f757-2hqpc" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133544 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09b6d403-50e8-4b92-aa1f-173ace636bca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133564 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-plugins-dir\") pod 
\"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133582 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20a37f55-0799-4e21-88d8-f50c736f01ef-metrics-tls\") pod \"dns-operator-744455d44c-qb6fp\" (UID: \"20a37f55-0799-4e21-88d8-f50c736f01ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133597 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-etcd-client\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133612 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6df34760-baa5-45b8-859e-c9935c6f5656-serving-cert\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133639 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2c285ca-61ae-40f2-b349-7028faa03a00-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133660 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1921652d-7f9e-4bef-927f-fd616e41f865-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133678 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133697 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133713 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-audit-dir\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133729 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld9g7\" (UniqueName: \"kubernetes.io/projected/8372e439-9ab9-4ba3-80e8-c6d3959187f7-kube-api-access-ld9g7\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133744 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6df34760-baa5-45b8-859e-c9935c6f5656-available-featuregates\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133763 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8n5r\" (UniqueName: \"kubernetes.io/projected/57f89f97-10d8-45d1-a69f-34d09c5224c9-kube-api-access-f8n5r\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133780 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-config\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132075 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-oauth-serving-cert\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133804 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/972f1013-d1cb-44f2-b79d-baacfef4e939-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132155 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133830 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-profile-collector-cert\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133855 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-serving-cert\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133879 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 
04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133903 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks6np\" (UniqueName: \"kubernetes.io/projected/cf54c874-99d3-4bff-bef0-992bcee74002-kube-api-access-ks6np\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133926 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133948 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133971 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-bound-sa-token\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.133994 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2bv\" (UniqueName: 
\"kubernetes.io/projected/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-kube-api-access-bs2bv\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134017 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-tls\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134039 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t77bz\" (UniqueName: \"kubernetes.io/projected/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-kube-api-access-t77bz\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134060 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8372e439-9ab9-4ba3-80e8-c6d3959187f7-machine-approver-tls\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134086 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134109 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-certificates\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134132 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgtj6\" (UniqueName: \"kubernetes.io/projected/39dc2e68-1df7-426f-aa50-15a542f6995b-kube-api-access-cgtj6\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134155 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaffcaf5-95c2-451c-af81-3472b875d910-serving-cert\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134179 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-config\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134200 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxkdf\" (UniqueName: \"kubernetes.io/projected/eaffcaf5-95c2-451c-af81-3472b875d910-kube-api-access-vxkdf\") pod \"etcd-operator-b45778765-kvhtt\" (UID: 
\"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134222 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-service-ca\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134242 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-serving-cert\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134267 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjgv\" (UniqueName: \"kubernetes.io/projected/972f1013-d1cb-44f2-b79d-baacfef4e939-kube-api-access-4bjgv\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134291 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/baaa32c9-702b-4a43-a7b7-7a98272f80f3-images\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134367 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-audit\") pod 
\"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134390 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-dir\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134441 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b214e59-6ee8-4ac4-8753-76ab84691b78-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134478 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1877b8e8-87db-477a-a619-be2fbdc97d89-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134519 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd84h\" (UniqueName: \"kubernetes.io/projected/93ea7292-1b26-4dc3-a5ba-d9b799c22264-kube-api-access-nd84h\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc 
kubenswrapper[4923]: I0321 04:19:25.134535 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6frhk\" (UniqueName: \"kubernetes.io/projected/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-kube-api-access-6frhk\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134552 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea7292-1b26-4dc3-a5ba-d9b799c22264-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134587 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-srv-cert\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134607 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-serving-cert\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134634 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-audit-dir\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134670 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134688 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8372e439-9ab9-4ba3-80e8-c6d3959187f7-config\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134704 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwg8p\" (UniqueName: \"kubernetes.io/projected/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-kube-api-access-zwg8p\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134719 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc511693-2a38-4ca9-bf24-c7f8b7c47972-config\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 
04:19:25.134757 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-oauth-config\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134774 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8372e439-9ab9-4ba3-80e8-c6d3959187f7-auth-proxy-config\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134789 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134827 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3764589a-2735-4154-bda2-c4e462e62202-config\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134847 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: 
\"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134865 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swljp\" (UniqueName: \"kubernetes.io/projected/1921652d-7f9e-4bef-927f-fd616e41f865-kube-api-access-swljp\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134909 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134936 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a338651b-7efe-4160-84a4-30471aadc1b7-webhook-cert\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.134959 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwfl\" (UniqueName: \"kubernetes.io/projected/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-kube-api-access-fvwfl\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135029 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135052 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf54c874-99d3-4bff-bef0-992bcee74002-images\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135103 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3764589a-2735-4154-bda2-c4e462e62202-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135136 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b6d403-50e8-4b92-aa1f-173ace636bca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135184 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57f89f97-10d8-45d1-a69f-34d09c5224c9-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135215 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-etcd-serving-ca\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135238 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135289 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2c285ca-61ae-40f2-b349-7028faa03a00-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135313 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135358 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hqnqb\" (UniqueName: \"kubernetes.io/projected/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-kube-api-access-hqnqb\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135383 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2c285ca-61ae-40f2-b349-7028faa03a00-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135405 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3764589a-2735-4154-bda2-c4e462e62202-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135429 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b214e59-6ee8-4ac4-8753-76ab84691b78-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135451 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135479 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaa32c9-702b-4a43-a7b7-7a98272f80f3-config\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135506 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/baaa32c9-702b-4a43-a7b7-7a98272f80f3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135530 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttsdv\" (UniqueName: \"kubernetes.io/projected/5944eec5-8a3f-4fde-86d1-792b48c0a2bd-kube-api-access-ttsdv\") pod \"cluster-samples-operator-665b6dd947-wfhqh\" (UID: \"5944eec5-8a3f-4fde-86d1-792b48c0a2bd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135552 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a338651b-7efe-4160-84a4-30471aadc1b7-tmpfs\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc 
kubenswrapper[4923]: I0321 04:19:25.135586 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b214e59-6ee8-4ac4-8753-76ab84691b78-config\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135608 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4xpk\" (UniqueName: \"kubernetes.io/projected/20a37f55-0799-4e21-88d8-f50c736f01ef-kube-api-access-x4xpk\") pod \"dns-operator-744455d44c-qb6fp\" (UID: \"20a37f55-0799-4e21-88d8-f50c736f01ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135645 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135671 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw6nk\" (UniqueName: \"kubernetes.io/projected/6df34760-baa5-45b8-859e-c9935c6f5656-kube-api-access-vw6nk\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.135692 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-service-ca\") pod 
\"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.136191 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.136476 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-trusted-ca-bundle\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.137057 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-config\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.137786 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57f89f97-10d8-45d1-a69f-34d09c5224c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.137950 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-encryption-config\") pod 
\"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.138042 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-etcd-client\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.138404 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1921652d-7f9e-4bef-927f-fd616e41f865-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.138493 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-client-ca\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.138548 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-audit-policies\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.138972 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.138982 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-config\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.140127 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-config\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.140799 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f89f97-10d8-45d1-a69f-34d09c5224c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.140977 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.141236 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.141605 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-trusted-ca\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.141994 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.132937 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-policies\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.145333 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-audit-dir\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.145794 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: 
\"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.146693 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.146880 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-audit-dir\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.147946 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.148165 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.148254 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8372e439-9ab9-4ba3-80e8-c6d3959187f7-config\") pod 
\"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.148444 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.148590 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-etcd-serving-ca\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.148826 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.141597 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.148940 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-serving-cert\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.148980 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-dir\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.150609 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-serving-cert\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.151106 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-service-ca\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.151539 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8372e439-9ab9-4ba3-80e8-c6d3959187f7-auth-proxy-config\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.151602 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/baaa32c9-702b-4a43-a7b7-7a98272f80f3-images\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.152400 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-certificates\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.153036 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-serving-cert\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.153074 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-config\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.153292 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.153700 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.153713 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-serving-cert\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.153979 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-tls\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.154122 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-serving-cert\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.154921 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5944eec5-8a3f-4fde-86d1-792b48c0a2bd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wfhqh\" (UID: \"5944eec5-8a3f-4fde-86d1-792b48c0a2bd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.156034 
4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-audit\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.156165 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baaa32c9-702b-4a43-a7b7-7a98272f80f3-config\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.156508 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1921652d-7f9e-4bef-927f-fd616e41f865-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.157018 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.157265 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/baaa32c9-702b-4a43-a7b7-7a98272f80f3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.157274 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.157639 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.157786 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.158082 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/20a37f55-0799-4e21-88d8-f50c736f01ef-metrics-tls\") pod \"dns-operator-744455d44c-qb6fp\" (UID: \"20a37f55-0799-4e21-88d8-f50c736f01ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.158089 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-console-oauth-config\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.158481 4923 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-encryption-config\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.158941 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8372e439-9ab9-4ba3-80e8-c6d3959187f7-machine-approver-tls\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.159355 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-etcd-client\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.159397 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.159467 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:25 crc 
kubenswrapper[4923]: I0321 04:19:25.162749 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.176879 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.196300 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.216342 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.236931 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237042 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.237124 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 04:19:25.737105468 +0000 UTC m=+130.890116555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237197 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgwh\" (UniqueName: \"kubernetes.io/projected/1877b8e8-87db-477a-a619-be2fbdc97d89-kube-api-access-rfgwh\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237229 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-mountpoint-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237264 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc511693-2a38-4ca9-bf24-c7f8b7c47972-serving-cert\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237286 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/a338651b-7efe-4160-84a4-30471aadc1b7-apiservice-cert\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237305 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1877b8e8-87db-477a-a619-be2fbdc97d89-proxy-tls\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237344 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/972f1013-d1cb-44f2-b79d-baacfef4e939-srv-cert\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237351 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-mountpoint-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237368 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf54c874-99d3-4bff-bef0-992bcee74002-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237403 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-socket-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237453 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-registration-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237739 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-registration-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237742 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-socket-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237481 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09b6d403-50e8-4b92-aa1f-173ace636bca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237800 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-plugins-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237820 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp9ql\" (UniqueName: \"kubernetes.io/projected/3f1486ed-5a65-43a1-9a45-c02318d4d831-kube-api-access-lp9ql\") pod \"downloads-7954f5f757-2hqpc\" (UID: \"3f1486ed-5a65-43a1-9a45-c02318d4d831\") " pod="openshift-console/downloads-7954f5f757-2hqpc" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237838 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6df34760-baa5-45b8-859e-c9935c6f5656-serving-cert\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237900 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-plugins-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237856 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2c285ca-61ae-40f2-b349-7028faa03a00-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 
21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237951 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6df34760-baa5-45b8-859e-c9935c6f5656-available-featuregates\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.237991 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-config\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238012 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/972f1013-d1cb-44f2-b79d-baacfef4e939-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238037 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf54c874-99d3-4bff-bef0-992bcee74002-auth-proxy-config\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238087 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238107 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks6np\" (UniqueName: \"kubernetes.io/projected/cf54c874-99d3-4bff-bef0-992bcee74002-kube-api-access-ks6np\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238161 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgtj6\" (UniqueName: \"kubernetes.io/projected/39dc2e68-1df7-426f-aa50-15a542f6995b-kube-api-access-cgtj6\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238178 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaffcaf5-95c2-451c-af81-3472b875d910-serving-cert\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238264 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxkdf\" (UniqueName: \"kubernetes.io/projected/eaffcaf5-95c2-451c-af81-3472b875d910-kube-api-access-vxkdf\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238285 4923 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4bjgv\" (UniqueName: \"kubernetes.io/projected/972f1013-d1cb-44f2-b79d-baacfef4e939-kube-api-access-4bjgv\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238280 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6df34760-baa5-45b8-859e-c9935c6f5656-available-featuregates\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238384 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b214e59-6ee8-4ac4-8753-76ab84691b78-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238413 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1877b8e8-87db-477a-a619-be2fbdc97d89-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238449 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd84h\" (UniqueName: \"kubernetes.io/projected/93ea7292-1b26-4dc3-a5ba-d9b799c22264-kube-api-access-nd84h\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: 
\"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238472 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6frhk\" (UniqueName: \"kubernetes.io/projected/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-kube-api-access-6frhk\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238552 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea7292-1b26-4dc3-a5ba-d9b799c22264-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238572 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-srv-cert\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238675 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwg8p\" (UniqueName: \"kubernetes.io/projected/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-kube-api-access-zwg8p\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238697 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc511693-2a38-4ca9-bf24-c7f8b7c47972-config\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238783 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238801 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3764589a-2735-4154-bda2-c4e462e62202-config\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.238817 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a338651b-7efe-4160-84a4-30471aadc1b7-webhook-cert\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239019 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf54c874-99d3-4bff-bef0-992bcee74002-images\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239049 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3764589a-2735-4154-bda2-c4e462e62202-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239067 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b6d403-50e8-4b92-aa1f-173ace636bca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239087 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239104 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2c285ca-61ae-40f2-b349-7028faa03a00-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239119 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b214e59-6ee8-4ac4-8753-76ab84691b78-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239134 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239152 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1877b8e8-87db-477a-a619-be2fbdc97d89-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239158 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2c285ca-61ae-40f2-b349-7028faa03a00-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239178 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3764589a-2735-4154-bda2-c4e462e62202-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239201 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a338651b-7efe-4160-84a4-30471aadc1b7-tmpfs\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239215 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-config\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239228 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b214e59-6ee8-4ac4-8753-76ab84691b78-config\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239388 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw6nk\" (UniqueName: \"kubernetes.io/projected/6df34760-baa5-45b8-859e-c9935c6f5656-kube-api-access-vw6nk\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239446 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-service-ca\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239506 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc511693-2a38-4ca9-bf24-c7f8b7c47972-trusted-ca\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239572 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-csi-data-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239633 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239689 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxsvz\" (UniqueName: \"kubernetes.io/projected/e2c285ca-61ae-40f2-b349-7028faa03a00-kube-api-access-zxsvz\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239725 
4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b6d403-50e8-4b92-aa1f-173ace636bca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239761 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw66s\" (UniqueName: \"kubernetes.io/projected/dc511693-2a38-4ca9-bf24-c7f8b7c47972-kube-api-access-sw66s\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239800 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmbql\" (UniqueName: \"kubernetes.io/projected/8ae3d7d8-8466-4519-91c0-48c7230d1388-kube-api-access-hmbql\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239839 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf54c874-99d3-4bff-bef0-992bcee74002-proxy-tls\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239873 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-ca\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239908 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-client\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239940 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-secret-volume\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.239977 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvks9\" (UniqueName: \"kubernetes.io/projected/a338651b-7efe-4160-84a4-30471aadc1b7-kube-api-access-tvks9\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.240014 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea7292-1b26-4dc3-a5ba-d9b799c22264-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.240065 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-21 04:19:25.740041886 +0000 UTC m=+130.893053013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.240193 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2c285ca-61ae-40f2-b349-7028faa03a00-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.240249 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a338651b-7efe-4160-84a4-30471aadc1b7-tmpfs\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.240267 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8ae3d7d8-8466-4519-91c0-48c7230d1388-csi-data-dir\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.240660 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-service-ca\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.241644 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-ca\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.242160 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaffcaf5-95c2-451c-af81-3472b875d910-serving-cert\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.245313 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09b6d403-50e8-4b92-aa1f-173ace636bca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.245472 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eaffcaf5-95c2-451c-af81-3472b875d910-etcd-client\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.257914 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.277412 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.297115 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.317041 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.319880 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09b6d403-50e8-4b92-aa1f-173ace636bca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.336495 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.341656 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.341815 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 04:19:25.84179389 +0000 UTC m=+130.994804987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.342118 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.342565 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:25.842545192 +0000 UTC m=+130.995556289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.357606 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.376367 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.397044 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.417504 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.420151 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/972f1013-d1cb-44f2-b79d-baacfef4e939-srv-cert\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.437613 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.442804 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.442995 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:25.942957947 +0000 UTC m=+131.095969074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.443472 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.443789 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/972f1013-d1cb-44f2-b79d-baacfef4e939-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:25 crc 
kubenswrapper[4923]: E0321 04:19:25.443910 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:25.943888315 +0000 UTC m=+131.096899502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.444457 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-profile-collector-cert\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.448855 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-secret-volume\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.457125 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.477427 4923 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.496674 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.505864 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.528132 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.531617 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.536815 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.545787 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.545996 4923 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.045937108 +0000 UTC m=+131.198948235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.546623 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.547195 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.047176335 +0000 UTC m=+131.200187462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.557190 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.577529 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.597895 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.603723 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93ea7292-1b26-4dc3-a5ba-d9b799c22264-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.617267 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.622918 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e2c285ca-61ae-40f2-b349-7028faa03a00-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.637763 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.641526 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93ea7292-1b26-4dc3-a5ba-d9b799c22264-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.648278 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.648486 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.148458145 +0000 UTC m=+131.301469272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.649028 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.649518 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.149487336 +0000 UTC m=+131.302498463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.657963 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.677266 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.696982 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.718211 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.724837 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3764589a-2735-4154-bda2-c4e462e62202-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.738130 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 21 04:19:25 crc kubenswrapper[4923]: 
I0321 04:19:25.751149 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.751382 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.251355154 +0000 UTC m=+131.404366271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.751596 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.752027 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-21 04:19:26.252011693 +0000 UTC m=+131.405022810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.757720 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.760936 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3764589a-2735-4154-bda2-c4e462e62202-config\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.778239 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.797194 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.817807 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.822701 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-srv-cert\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.844277 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.852732 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.852988 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.352956764 +0000 UTC m=+131.505967891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.853116 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.853593 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.353584653 +0000 UTC m=+131.506595740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.853756 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc511693-2a38-4ca9-bf24-c7f8b7c47972-trusted-ca\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.856206 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.877605 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.896493 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.902801 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc511693-2a38-4ca9-bf24-c7f8b7c47972-serving-cert\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.918623 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 
21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.922095 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc511693-2a38-4ca9-bf24-c7f8b7c47972-config\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.937115 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.954147 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.954252 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.454232974 +0000 UTC m=+131.607244071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.954618 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:25 crc kubenswrapper[4923]: E0321 04:19:25.954901 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.454893054 +0000 UTC m=+131.607904141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.954983 4923 request.go:700] Waited for 1.00572199s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0 Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.957137 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.963716 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a338651b-7efe-4160-84a4-30471aadc1b7-apiservice-cert\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.964094 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a338651b-7efe-4160-84a4-30471aadc1b7-webhook-cert\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.978988 4923 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.992414 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1877b8e8-87db-477a-a619-be2fbdc97d89-proxy-tls\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" Mar 21 04:19:25 crc kubenswrapper[4923]: I0321 04:19:25.997635 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.017181 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.038392 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.056140 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.056471 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.556420781 +0000 UTC m=+131.709431928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.057400 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.058282 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.058585 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.558557325 +0000 UTC m=+131.711568432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.072641 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6df34760-baa5-45b8-859e-c9935c6f5656-serving-cert\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.076893 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.097064 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.117567 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.141746 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.158119 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.162154 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.162410 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.662375601 +0000 UTC m=+131.815386728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.163209 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.163666 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.663645969 +0000 UTC m=+131.816657096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.177001 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.196898 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.201166 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf54c874-99d3-4bff-bef0-992bcee74002-images\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.217649 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.225408 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf54c874-99d3-4bff-bef0-992bcee74002-proxy-tls\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.237469 4923 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.239782 4923 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.239867 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b214e59-6ee8-4ac4-8753-76ab84691b78-config podName:7b214e59-6ee8-4ac4-8753-76ab84691b78 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.739843029 +0000 UTC m=+131.892854146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7b214e59-6ee8-4ac4-8753-76ab84691b78-config") pod "kube-apiserver-operator-766d6c64bb-6dbvj" (UID: "7b214e59-6ee8-4ac4-8753-76ab84691b78") : failed to sync configmap cache: timed out waiting for the condition Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.239988 4923 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.239993 4923 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.240259 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b214e59-6ee8-4ac4-8753-76ab84691b78-serving-cert podName:7b214e59-6ee8-4ac4-8753-76ab84691b78 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.740233631 +0000 UTC m=+131.893244728 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7b214e59-6ee8-4ac4-8753-76ab84691b78-serving-cert") pod "kube-apiserver-operator-766d6c64bb-6dbvj" (UID: "7b214e59-6ee8-4ac4-8753-76ab84691b78") : failed to sync secret cache: timed out waiting for the condition Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.240395 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume podName:b717fc18-bfdb-4e99-8fc0-ea7c905dd908 nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.740376515 +0000 UTC m=+131.893387692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume") pod "collect-profiles-29567775-vxpk6" (UID: "b717fc18-bfdb-4e99-8fc0-ea7c905dd908") : failed to sync configmap cache: timed out waiting for the condition Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.257547 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.264910 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.265141 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.765084754 +0000 UTC m=+131.918095841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.265633 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.265943 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.76593489 +0000 UTC m=+131.918945977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.276354 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.296925 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.316793 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.337341 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.356792 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.367104 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.367405 4923 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.867378925 +0000 UTC m=+132.020390032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.368074 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.368582 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.868570881 +0000 UTC m=+132.021581978 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.376429 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.396704 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.417703 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.437017 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.456826 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.469732 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.469949 4923 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.969912233 +0000 UTC m=+132.122923380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.470111 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.470507 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:26.97048875 +0000 UTC m=+132.123499847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.477641 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.497133 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.517464 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.538219 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.557119 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.571268 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.571591 4923 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.071547803 +0000 UTC m=+132.224558920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.572050 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.572573 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.072551993 +0000 UTC m=+132.225563110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.577002 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.597492 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.617857 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.637381 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.657414 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.673431 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.673645 4923 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.173617027 +0000 UTC m=+132.326628154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.674476 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.674932 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.174915386 +0000 UTC m=+132.327926513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.676899 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.718039 4923 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.737190 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.756504 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.775455 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.776019 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.776146 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.276109714 +0000 UTC m=+132.429120851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.776255 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b214e59-6ee8-4ac4-8753-76ab84691b78-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.776382 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b214e59-6ee8-4ac4-8753-76ab84691b78-config\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.776568 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.776634 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.777300 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.277237808 +0000 UTC m=+132.430248935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.778146 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.778388 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b214e59-6ee8-4ac4-8753-76ab84691b78-config\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.781481 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b214e59-6ee8-4ac4-8753-76ab84691b78-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.798777 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.817378 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.837807 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.857705 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.877270 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.877716 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.878575 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.378552079 +0000 UTC m=+132.531563196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.897617 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.917366 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.955601 4923 request.go:700] Waited for 1.823396404s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.977878 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8v49\" (UniqueName: \"kubernetes.io/projected/baaa32c9-702b-4a43-a7b7-7a98272f80f3-kube-api-access-p8v49\") pod \"machine-api-operator-5694c8668f-22lp8\" (UID: \"baaa32c9-702b-4a43-a7b7-7a98272f80f3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.986587 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:26 crc kubenswrapper[4923]: E0321 04:19:26.987099 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.487073655 +0000 UTC m=+132.640084782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:26 crc kubenswrapper[4923]: I0321 04:19:26.998502 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5vj\" (UniqueName: \"kubernetes.io/projected/9e29bd24-b2c3-434d-bdf8-6f05376fb87a-kube-api-access-xm5vj\") pod \"openshift-apiserver-operator-796bbdcf4f-kkp72\" (UID: \"9e29bd24-b2c3-434d-bdf8-6f05376fb87a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.017770 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhst\" (UniqueName: \"kubernetes.io/projected/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-kube-api-access-fxhst\") pod \"controller-manager-879f6c89f-vwt2s\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 
04:19:27.042498 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbqrd\" (UniqueName: \"kubernetes.io/projected/bd68bc5e-ac31-4f87-8e40-d7e4d7696106-kube-api-access-kbqrd\") pod \"apiserver-76f77b778f-5bhvd\" (UID: \"bd68bc5e-ac31-4f87-8e40-d7e4d7696106\") " pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.049591 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.065044 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmx2s\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-kube-api-access-lmx2s\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.076942 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77bz\" (UniqueName: \"kubernetes.io/projected/ea29f2c8-3e4b-42b5-8161-af03ae16ed23-kube-api-access-t77bz\") pod \"apiserver-7bbb656c7d-kn7nz\" (UID: \"ea29f2c8-3e4b-42b5-8161-af03ae16ed23\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.081905 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.087769 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.088688 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.588672204 +0000 UTC m=+132.741683291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.102554 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqnqb\" (UniqueName: \"kubernetes.io/projected/8fbfb476-3201-4f1b-b6a1-9b2b858d37d4-kube-api-access-hqnqb\") pod \"authentication-operator-69f744f599-94ccs\" (UID: \"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.113275 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4xpk\" (UniqueName: 
\"kubernetes.io/projected/20a37f55-0799-4e21-88d8-f50c736f01ef-kube-api-access-x4xpk\") pod \"dns-operator-744455d44c-qb6fp\" (UID: \"20a37f55-0799-4e21-88d8-f50c736f01ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.113411 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.135098 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57f89f97-10d8-45d1-a69f-34d09c5224c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.154773 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttsdv\" (UniqueName: \"kubernetes.io/projected/5944eec5-8a3f-4fde-86d1-792b48c0a2bd-kube-api-access-ttsdv\") pod \"cluster-samples-operator-665b6dd947-wfhqh\" (UID: \"5944eec5-8a3f-4fde-86d1-792b48c0a2bd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.159621 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.173078 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2bv\" (UniqueName: \"kubernetes.io/projected/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-kube-api-access-bs2bv\") pod \"oauth-openshift-558db77b4-5jpr6\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.189935 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.190518 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.690506221 +0000 UTC m=+132.843517308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.193774 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-bound-sa-token\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.212106 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld9g7\" (UniqueName: \"kubernetes.io/projected/8372e439-9ab9-4ba3-80e8-c6d3959187f7-kube-api-access-ld9g7\") pod \"machine-approver-56656f9798-qm9sh\" (UID: \"8372e439-9ab9-4ba3-80e8-c6d3959187f7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.241720 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.247353 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8n5r\" (UniqueName: \"kubernetes.io/projected/57f89f97-10d8-45d1-a69f-34d09c5224c9-kube-api-access-f8n5r\") pod \"ingress-operator-5b745b69d9-q2rkk\" (UID: \"57f89f97-10d8-45d1-a69f-34d09c5224c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.273770 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwfl\" (UniqueName: \"kubernetes.io/projected/bba19ac5-eeb6-4536-93c2-22f110e6ce8a-kube-api-access-fvwfl\") pod \"console-f9d7485db-p69c7\" (UID: \"bba19ac5-eeb6-4536-93c2-22f110e6ce8a\") " pod="openshift-console/console-f9d7485db-p69c7"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.276276 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.280167 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swljp\" (UniqueName: \"kubernetes.io/projected/1921652d-7f9e-4bef-927f-fd616e41f865-kube-api-access-swljp\") pod \"openshift-controller-manager-operator-756b6f6bc6-mnss5\" (UID: \"1921652d-7f9e-4bef-927f-fd616e41f865\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.290999 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.291586 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.791563825 +0000 UTC m=+132.944574922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.298937 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.300924 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgwh\" (UniqueName: \"kubernetes.io/projected/1877b8e8-87db-477a-a619-be2fbdc97d89-kube-api-access-rfgwh\") pod \"machine-config-controller-84d6567774-dpq9j\" (UID: \"1877b8e8-87db-477a-a619-be2fbdc97d89\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.304698 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.324228 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09b6d403-50e8-4b92-aa1f-173ace636bca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-68fmb\" (UID: \"09b6d403-50e8-4b92-aa1f-173ace636bca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.331299 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.335666 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp9ql\" (UniqueName: \"kubernetes.io/projected/3f1486ed-5a65-43a1-9a45-c02318d4d831-kube-api-access-lp9ql\") pod \"downloads-7954f5f757-2hqpc\" (UID: \"3f1486ed-5a65-43a1-9a45-c02318d4d831\") " pod="openshift-console/downloads-7954f5f757-2hqpc"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.347878 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-22lp8"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.358926 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks6np\" (UniqueName: \"kubernetes.io/projected/cf54c874-99d3-4bff-bef0-992bcee74002-kube-api-access-ks6np\") pod \"machine-config-operator-74547568cd-s5spg\" (UID: \"cf54c874-99d3-4bff-bef0-992bcee74002\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.371064 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgtj6\" (UniqueName: \"kubernetes.io/projected/39dc2e68-1df7-426f-aa50-15a542f6995b-kube-api-access-cgtj6\") pod \"marketplace-operator-79b997595-8bxdt\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.378926 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hqpc"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.388726 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.390721 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vwt2s"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.391884 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxkdf\" (UniqueName: \"kubernetes.io/projected/eaffcaf5-95c2-451c-af81-3472b875d910-kube-api-access-vxkdf\") pod \"etcd-operator-b45778765-kvhtt\" (UID: \"eaffcaf5-95c2-451c-af81-3472b875d910\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.392461 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq"
Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.392787 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.892775543 +0000 UTC m=+133.045786630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.414878 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b214e59-6ee8-4ac4-8753-76ab84691b78-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6dbvj\" (UID: \"7b214e59-6ee8-4ac4-8753-76ab84691b78\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.427865 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.432683 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bjgv\" (UniqueName: \"kubernetes.io/projected/972f1013-d1cb-44f2-b79d-baacfef4e939-kube-api-access-4bjgv\") pod \"olm-operator-6b444d44fb-qv8h5\" (UID: \"972f1013-d1cb-44f2-b79d-baacfef4e939\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.439233 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.445855 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.451244 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.457863 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd84h\" (UniqueName: \"kubernetes.io/projected/93ea7292-1b26-4dc3-a5ba-d9b799c22264-kube-api-access-nd84h\") pod \"kube-storage-version-migrator-operator-b67b599dd-4lqw2\" (UID: \"93ea7292-1b26-4dc3-a5ba-d9b799c22264\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.472480 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6frhk\" (UniqueName: \"kubernetes.io/projected/b56d396c-a7a2-466c-8c2a-973f24a9e3f1-kube-api-access-6frhk\") pod \"catalog-operator-68c6474976-k2q79\" (UID: \"b56d396c-a7a2-466c-8c2a-973f24a9e3f1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.473610 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.490708 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.491966 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwg8p\" (UniqueName: \"kubernetes.io/projected/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-kube-api-access-zwg8p\") pod \"collect-profiles-29567775-vxpk6\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.493883 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.494213 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:27.994180857 +0000 UTC m=+133.147191944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.503366 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.510265 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.512668 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e2c285ca-61ae-40f2-b349-7028faa03a00-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.514565 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.516606 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.523850 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.532358 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3764589a-2735-4154-bda2-c4e462e62202-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vsxrf\" (UID: \"3764589a-2735-4154-bda2-c4e462e62202\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.553160 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p69c7"
Mar 21 04:19:27 crc kubenswrapper[4923]: W0321 04:19:27.553299 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e29bd24_b2c3_434d_bdf8_6f05376fb87a.slice/crio-085ec835217dca00415ee320dac1bab7b47afbc72cdaaedde2fd23a20d2da9dc WatchSource:0}: Error finding container 085ec835217dca00415ee320dac1bab7b47afbc72cdaaedde2fd23a20d2da9dc: Status 404 returned error can't find the container with id 085ec835217dca00415ee320dac1bab7b47afbc72cdaaedde2fd23a20d2da9dc
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.554864 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw6nk\" (UniqueName: \"kubernetes.io/projected/6df34760-baa5-45b8-859e-c9935c6f5656-kube-api-access-vw6nk\") pod \"openshift-config-operator-7777fb866f-blnd8\" (UID: \"6df34760-baa5-45b8-859e-c9935c6f5656\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.565477 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5bhvd"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.574381 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxsvz\" (UniqueName: \"kubernetes.io/projected/e2c285ca-61ae-40f2-b349-7028faa03a00-kube-api-access-zxsvz\") pod \"cluster-image-registry-operator-dc59b4c8b-8gdjq\" (UID: \"e2c285ca-61ae-40f2-b349-7028faa03a00\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.586293 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.595995 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq"
Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.596483 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.096464187 +0000 UTC m=+133.249475344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.599770 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw66s\" (UniqueName: \"kubernetes.io/projected/dc511693-2a38-4ca9-bf24-c7f8b7c47972-kube-api-access-sw66s\") pod \"console-operator-58897d9998-zljcw\" (UID: \"dc511693-2a38-4ca9-bf24-c7f8b7c47972\") " pod="openshift-console-operator/console-operator-58897d9998-zljcw"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.609961 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8"
Mar 21 04:19:27 crc kubenswrapper[4923]: W0321 04:19:27.612688 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd68bc5e_ac31_4f87_8e40_d7e4d7696106.slice/crio-62ebbcde0f2eaec792c13e71e970fb45f38ac07ed3b3e37ff02ae52c4f9d1ecc WatchSource:0}: Error finding container 62ebbcde0f2eaec792c13e71e970fb45f38ac07ed3b3e37ff02ae52c4f9d1ecc: Status 404 returned error can't find the container with id 62ebbcde0f2eaec792c13e71e970fb45f38ac07ed3b3e37ff02ae52c4f9d1ecc
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.614269 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qb6fp"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.624211 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.625560 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvks9\" (UniqueName: \"kubernetes.io/projected/a338651b-7efe-4160-84a4-30471aadc1b7-kube-api-access-tvks9\") pod \"packageserver-d55dfcdfc-j26hr\" (UID: \"a338651b-7efe-4160-84a4-30471aadc1b7\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.634407 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.635953 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmbql\" (UniqueName: \"kubernetes.io/projected/8ae3d7d8-8466-4519-91c0-48c7230d1388-kube-api-access-hmbql\") pod \"csi-hostpathplugin-qgp9s\" (UID: \"8ae3d7d8-8466-4519-91c0-48c7230d1388\") " pod="hostpath-provisioner/csi-hostpathplugin-qgp9s"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.667741 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.669031 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j"]
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.698863 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.698955 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.198938283 +0000 UTC m=+133.351949370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699029 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5czs\" (UniqueName: \"kubernetes.io/projected/dfd46889-1472-4303-9b88-bd0b374fc3ac-kube-api-access-t5czs\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699048 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-default-certificate\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699080 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699101 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6d274204-1cbf-4028-8cc7-ae94ad474006-ready\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699126 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wgxn\" (UniqueName: \"kubernetes.io/projected/01bb7225-8917-44ec-894f-c2e237b1826e-kube-api-access-4wgxn\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699140 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-config\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699155 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e45e0807-99b9-42b2-b20c-f383d864fb2d-signing-key\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699174 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mxt\" (UniqueName: \"kubernetes.io/projected/927f7151-f378-4783-9da9-3c69886850d9-kube-api-access-64mxt\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699187 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dfd46889-1472-4303-9b88-bd0b374fc3ac-metrics-tls\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699221 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960489fc-897c-40a7-a0be-be20b3b81ff5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4jzdb\" (UID: \"960489fc-897c-40a7-a0be-be20b3b81ff5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699237 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5571552-4369-46f6-ad29-a54b1f4a7a8f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rwpdp\" (UID: \"b5571552-4369-46f6-ad29-a54b1f4a7a8f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699265 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhfp\" (UniqueName: \"kubernetes.io/projected/e45e0807-99b9-42b2-b20c-f383d864fb2d-kube-api-access-4nhfp\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699279 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws7hv\" (UniqueName: \"kubernetes.io/projected/c71bf24a-b3c6-410d-9fc2-13ef571982c1-kube-api-access-ws7hv\") pod \"ingress-canary-k47v5\" (UID: \"c71bf24a-b3c6-410d-9fc2-13ef571982c1\") " pod="openshift-ingress-canary/ingress-canary-k47v5"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699330 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-client-ca\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699346 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-metrics-certs\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699361 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jhm5\" (UniqueName: \"kubernetes.io/projected/ef1a14b9-d6d5-4d4c-8666-173be56a1538-kube-api-access-8jhm5\") pod \"migrator-59844c95c7-bs4dw\" (UID: \"ef1a14b9-d6d5-4d4c-8666-173be56a1538\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699383 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c71bf24a-b3c6-410d-9fc2-13ef571982c1-cert\") pod \"ingress-canary-k47v5\" (UID: \"c71bf24a-b3c6-410d-9fc2-13ef571982c1\") " pod="openshift-ingress-canary/ingress-canary-k47v5"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699398 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-stats-auth\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699426 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-config\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699440 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d274204-1cbf-4028-8cc7-ae94ad474006-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699461 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/927f7151-f378-4783-9da9-3c69886850d9-node-bootstrap-token\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699474 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/927f7151-f378-4783-9da9-3c69886850d9-certs\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699705 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddqp\" (UniqueName: \"kubernetes.io/projected/960489fc-897c-40a7-a0be-be20b3b81ff5-kube-api-access-xddqp\") pod \"multus-admission-controller-857f4d67dd-4jzdb\" (UID: \"960489fc-897c-40a7-a0be-be20b3b81ff5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699726 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd46889-1472-4303-9b88-bd0b374fc3ac-config-volume\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699742 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bb7225-8917-44ec-894f-c2e237b1826e-service-ca-bundle\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699764 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzvz\" (UniqueName: \"kubernetes.io/projected/61b9ce21-4328-454f-bb40-0c42c3815831-kube-api-access-zvzvz\") pod \"package-server-manager-789f6589d5-cf5k2\" (UID: \"61b9ce21-4328-454f-bb40-0c42c3815831\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699787 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptn4d\" (UniqueName: \"kubernetes.io/projected/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-kube-api-access-ptn4d\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699801 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d274204-1cbf-4028-8cc7-ae94ad474006-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699815 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtth8\" (UniqueName: \"kubernetes.io/projected/b5571552-4369-46f6-ad29-a54b1f4a7a8f-kube-api-access-gtth8\") pod \"control-plane-machine-set-operator-78cbb6b69f-rwpdp\" (UID: \"b5571552-4369-46f6-ad29-a54b1f4a7a8f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699832 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx5p\" (UniqueName: \"kubernetes.io/projected/6d274204-1cbf-4028-8cc7-ae94ad474006-kube-api-access-wbx5p\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699848 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2p7\" (UniqueName: \"kubernetes.io/projected/c5f8d415-601f-44f6-a5be-50c0e5c23826-kube-api-access-jt2p7\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699863 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e45e0807-99b9-42b2-b20c-f383d864fb2d-signing-cabundle\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699887 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61b9ce21-4328-454f-bb40-0c42c3815831-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cf5k2\" (UID: \"61b9ce21-4328-454f-bb40-0c42c3815831\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699903 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f8d415-601f-44f6-a5be-50c0e5c23826-serving-cert\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.699917 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-serving-cert\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l"
Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.701113 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jpr6"]
Mar 21 04:19:27 crc
kubenswrapper[4923]: E0321 04:19:27.701279 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.201267973 +0000 UTC m=+133.354279260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.710259 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.802458 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.802607 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.302581914 +0000 UTC m=+133.455593001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.802659 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptn4d\" (UniqueName: \"kubernetes.io/projected/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-kube-api-access-ptn4d\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.802686 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d274204-1cbf-4028-8cc7-ae94ad474006-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.802703 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtth8\" (UniqueName: \"kubernetes.io/projected/b5571552-4369-46f6-ad29-a54b1f4a7a8f-kube-api-access-gtth8\") pod \"control-plane-machine-set-operator-78cbb6b69f-rwpdp\" (UID: \"b5571552-4369-46f6-ad29-a54b1f4a7a8f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.802755 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbx5p\" (UniqueName: 
\"kubernetes.io/projected/6d274204-1cbf-4028-8cc7-ae94ad474006-kube-api-access-wbx5p\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.802800 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2p7\" (UniqueName: \"kubernetes.io/projected/c5f8d415-601f-44f6-a5be-50c0e5c23826-kube-api-access-jt2p7\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.802831 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e45e0807-99b9-42b2-b20c-f383d864fb2d-signing-cabundle\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803165 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61b9ce21-4328-454f-bb40-0c42c3815831-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cf5k2\" (UID: \"61b9ce21-4328-454f-bb40-0c42c3815831\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803200 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f8d415-601f-44f6-a5be-50c0e5c23826-serving-cert\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 
04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803236 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-serving-cert\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803400 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5czs\" (UniqueName: \"kubernetes.io/projected/dfd46889-1472-4303-9b88-bd0b374fc3ac-kube-api-access-t5czs\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803472 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-default-certificate\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803538 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803612 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6d274204-1cbf-4028-8cc7-ae94ad474006-ready\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803633 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wgxn\" (UniqueName: \"kubernetes.io/projected/01bb7225-8917-44ec-894f-c2e237b1826e-kube-api-access-4wgxn\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803701 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-config\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803719 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e45e0807-99b9-42b2-b20c-f383d864fb2d-signing-key\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803769 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dfd46889-1472-4303-9b88-bd0b374fc3ac-metrics-tls\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803840 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64mxt\" (UniqueName: \"kubernetes.io/projected/927f7151-f378-4783-9da9-3c69886850d9-kube-api-access-64mxt\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " 
pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803867 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5571552-4369-46f6-ad29-a54b1f4a7a8f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rwpdp\" (UID: \"b5571552-4369-46f6-ad29-a54b1f4a7a8f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803897 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960489fc-897c-40a7-a0be-be20b3b81ff5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4jzdb\" (UID: \"960489fc-897c-40a7-a0be-be20b3b81ff5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803961 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhfp\" (UniqueName: \"kubernetes.io/projected/e45e0807-99b9-42b2-b20c-f383d864fb2d-kube-api-access-4nhfp\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.803985 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws7hv\" (UniqueName: \"kubernetes.io/projected/c71bf24a-b3c6-410d-9fc2-13ef571982c1-kube-api-access-ws7hv\") pod \"ingress-canary-k47v5\" (UID: \"c71bf24a-b3c6-410d-9fc2-13ef571982c1\") " pod="openshift-ingress-canary/ingress-canary-k47v5" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804099 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/e45e0807-99b9-42b2-b20c-f383d864fb2d-signing-cabundle\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804576 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-client-ca\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804597 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-metrics-certs\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804727 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jhm5\" (UniqueName: \"kubernetes.io/projected/ef1a14b9-d6d5-4d4c-8666-173be56a1538-kube-api-access-8jhm5\") pod \"migrator-59844c95c7-bs4dw\" (UID: \"ef1a14b9-d6d5-4d4c-8666-173be56a1538\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804748 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c71bf24a-b3c6-410d-9fc2-13ef571982c1-cert\") pod \"ingress-canary-k47v5\" (UID: \"c71bf24a-b3c6-410d-9fc2-13ef571982c1\") " pod="openshift-ingress-canary/ingress-canary-k47v5" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804774 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-stats-auth\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804836 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-config\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804855 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d274204-1cbf-4028-8cc7-ae94ad474006-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804908 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/927f7151-f378-4783-9da9-3c69886850d9-node-bootstrap-token\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.804927 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/927f7151-f378-4783-9da9-3c69886850d9-certs\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.809155 4923 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-config\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.809389 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6d274204-1cbf-4028-8cc7-ae94ad474006-ready\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.809921 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.309898413 +0000 UTC m=+133.462909500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.810089 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d274204-1cbf-4028-8cc7-ae94ad474006-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.810799 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xddqp\" (UniqueName: \"kubernetes.io/projected/960489fc-897c-40a7-a0be-be20b3b81ff5-kube-api-access-xddqp\") pod \"multus-admission-controller-857f4d67dd-4jzdb\" (UID: \"960489fc-897c-40a7-a0be-be20b3b81ff5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.810899 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd46889-1472-4303-9b88-bd0b374fc3ac-config-volume\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.810970 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bb7225-8917-44ec-894f-c2e237b1826e-service-ca-bundle\") pod \"router-default-5444994796-tbxjk\" (UID: 
\"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.820829 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzvz\" (UniqueName: \"kubernetes.io/projected/61b9ce21-4328-454f-bb40-0c42c3815831-kube-api-access-zvzvz\") pod \"package-server-manager-789f6589d5-cf5k2\" (UID: \"61b9ce21-4328-454f-bb40-0c42c3815831\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.821944 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dfd46889-1472-4303-9b88-bd0b374fc3ac-metrics-tls\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.822064 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/927f7151-f378-4783-9da9-3c69886850d9-certs\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.822956 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f8d415-601f-44f6-a5be-50c0e5c23826-serving-cert\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.824201 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-client-ca\") pod 
\"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.824915 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d274204-1cbf-4028-8cc7-ae94ad474006-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.826233 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd46889-1472-4303-9b88-bd0b374fc3ac-config-volume\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.828066 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61b9ce21-4328-454f-bb40-0c42c3815831-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cf5k2\" (UID: \"61b9ce21-4328-454f-bb40-0c42c3815831\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.828916 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-serving-cert\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.829083 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-config\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.829272 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.829287 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5571552-4369-46f6-ad29-a54b1f4a7a8f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rwpdp\" (UID: \"b5571552-4369-46f6-ad29-a54b1f4a7a8f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.829811 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-94ccs"] Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.832131 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01bb7225-8917-44ec-894f-c2e237b1826e-service-ca-bundle\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.833930 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5"] Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.834519 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-default-certificate\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.838007 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.841377 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-stats-auth\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.841701 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960489fc-897c-40a7-a0be-be20b3b81ff5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4jzdb\" (UID: \"960489fc-897c-40a7-a0be-be20b3b81ff5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.843034 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk"] Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.844885 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c71bf24a-b3c6-410d-9fc2-13ef571982c1-cert\") pod \"ingress-canary-k47v5\" (UID: \"c71bf24a-b3c6-410d-9fc2-13ef571982c1\") " pod="openshift-ingress-canary/ingress-canary-k47v5" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.845248 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/e45e0807-99b9-42b2-b20c-f383d864fb2d-signing-key\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.848944 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01bb7225-8917-44ec-894f-c2e237b1826e-metrics-certs\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.848961 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/927f7151-f378-4783-9da9-3c69886850d9-node-bootstrap-token\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.854060 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptn4d\" (UniqueName: \"kubernetes.io/projected/e790bca6-2cf0-4c92-bba3-e3d609ea17a0-kube-api-access-ptn4d\") pod \"service-ca-operator-777779d784-xw67l\" (UID: \"e790bca6-2cf0-4c92-bba3-e3d609ea17a0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.874739 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hqpc"] Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.875479 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtth8\" (UniqueName: \"kubernetes.io/projected/b5571552-4369-46f6-ad29-a54b1f4a7a8f-kube-api-access-gtth8\") pod \"control-plane-machine-set-operator-78cbb6b69f-rwpdp\" (UID: \"b5571552-4369-46f6-ad29-a54b1f4a7a8f\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.890456 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.896965 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.899899 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2p7\" (UniqueName: \"kubernetes.io/projected/c5f8d415-601f-44f6-a5be-50c0e5c23826-kube-api-access-jt2p7\") pod \"route-controller-manager-6576b87f9c-c67dl\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.916002 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbx5p\" (UniqueName: \"kubernetes.io/projected/6d274204-1cbf-4028-8cc7-ae94ad474006-kube-api-access-wbx5p\") pod \"cni-sysctl-allowlist-ds-l24r9\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.924403 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:27 crc kubenswrapper[4923]: E0321 04:19:27.924927 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.424902344 +0000 UTC m=+133.577913431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.934180 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5czs\" (UniqueName: \"kubernetes.io/projected/dfd46889-1472-4303-9b88-bd0b374fc3ac-kube-api-access-t5czs\") pod \"dns-default-9czdb\" (UID: \"dfd46889-1472-4303-9b88-bd0b374fc3ac\") " pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.959150 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws7hv\" (UniqueName: \"kubernetes.io/projected/c71bf24a-b3c6-410d-9fc2-13ef571982c1-kube-api-access-ws7hv\") pod \"ingress-canary-k47v5\" (UID: \"c71bf24a-b3c6-410d-9fc2-13ef571982c1\") " pod="openshift-ingress-canary/ingress-canary-k47v5" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.960192 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.979429 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhfp\" (UniqueName: \"kubernetes.io/projected/e45e0807-99b9-42b2-b20c-f383d864fb2d-kube-api-access-4nhfp\") pod \"service-ca-9c57cc56f-4s8j8\" (UID: \"e45e0807-99b9-42b2-b20c-f383d864fb2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.987057 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:27 crc kubenswrapper[4923]: I0321 04:19:27.999590 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64mxt\" (UniqueName: \"kubernetes.io/projected/927f7151-f378-4783-9da9-3c69886850d9-kube-api-access-64mxt\") pod \"machine-config-server-kfnbv\" (UID: \"927f7151-f378-4783-9da9-3c69886850d9\") " pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.008372 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" event={"ID":"20a37f55-0799-4e21-88d8-f50c736f01ef","Type":"ContainerStarted","Data":"15e959741661b4e40399ddd38ef71ee428710364fe3cf21fc505d730b093703f"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.014712 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" event={"ID":"5944eec5-8a3f-4fde-86d1-792b48c0a2bd","Type":"ContainerStarted","Data":"b80f16a2e167b891f04514c6a5f9078a921677252f6d7a0e3ddade075dd73255"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.017460 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.025304 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.026407 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.026908 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.526892895 +0000 UTC m=+133.679903982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.027085 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" event={"ID":"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53","Type":"ContainerStarted","Data":"539dc637fb1a1ebe82ed3b0c0d96e90058ff912e433bd6dde3823845c3b18f9e"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.027399 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k47v5" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.027652 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wgxn\" (UniqueName: \"kubernetes.io/projected/01bb7225-8917-44ec-894f-c2e237b1826e-kube-api-access-4wgxn\") pod \"router-default-5444994796-tbxjk\" (UID: \"01bb7225-8917-44ec-894f-c2e237b1826e\") " pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.031187 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" event={"ID":"8372e439-9ab9-4ba3-80e8-c6d3959187f7","Type":"ContainerStarted","Data":"a3004f9faf77c8bf69ec4c227fe110f860bf6a38b915acf7e5b387174f303b6f"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.035382 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" event={"ID":"1877b8e8-87db-477a-a619-be2fbdc97d89","Type":"ContainerStarted","Data":"77b110531d049a134d92a0ded8e6d3a3fb509cf87fcd9588ccbbe0b6d54aab65"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.036424 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" event={"ID":"bd68bc5e-ac31-4f87-8e40-d7e4d7696106","Type":"ContainerStarted","Data":"62ebbcde0f2eaec792c13e71e970fb45f38ac07ed3b3e37ff02ae52c4f9d1ecc"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.039117 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xddqp\" (UniqueName: \"kubernetes.io/projected/960489fc-897c-40a7-a0be-be20b3b81ff5-kube-api-access-xddqp\") pod \"multus-admission-controller-857f4d67dd-4jzdb\" (UID: \"960489fc-897c-40a7-a0be-be20b3b81ff5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 
04:19:28.039563 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" event={"ID":"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4","Type":"ContainerStarted","Data":"5cbfddb648879c0a133e61e3e3fd7a4427702b4e08b46b6a41b3bfca07f305c6"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.040731 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" event={"ID":"ea29f2c8-3e4b-42b5-8161-af03ae16ed23","Type":"ContainerStarted","Data":"ce2388c7bc18b96ccd27aa7948f6f46badfac812e420fbcbc925b4db56ea5986"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.043973 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hqpc" event={"ID":"3f1486ed-5a65-43a1-9a45-c02318d4d831","Type":"ContainerStarted","Data":"1404b41367360ece9edcc283a0d9093fb91a95964b03627c326b54442442f787"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.045511 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" event={"ID":"1921652d-7f9e-4bef-927f-fd616e41f865","Type":"ContainerStarted","Data":"3639b39b5a650ecd17872e67f8104a812fbbce291e1996c2c8f483da02fd13a4"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.046717 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" event={"ID":"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d","Type":"ContainerStarted","Data":"d436a9a720d77342e5cf55028ec063d96119d24b46ccd3ce38d736c2e8048a64"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.047811 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" event={"ID":"57f89f97-10d8-45d1-a69f-34d09c5224c9","Type":"ContainerStarted","Data":"d252813ec9edd6866eda61f8bc49c440e6a20052bab9f524da4c48dd6926cc83"} Mar 21 
04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.049291 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" event={"ID":"9e29bd24-b2c3-434d-bdf8-6f05376fb87a","Type":"ContainerStarted","Data":"085ec835217dca00415ee320dac1bab7b47afbc72cdaaedde2fd23a20d2da9dc"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.050470 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" event={"ID":"baaa32c9-702b-4a43-a7b7-7a98272f80f3","Type":"ContainerStarted","Data":"f7b3d538719c133c3ea4af14c756729d2045197838c2fb4610f4d958b9c43108"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.050495 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" event={"ID":"baaa32c9-702b-4a43-a7b7-7a98272f80f3","Type":"ContainerStarted","Data":"ab3f19a0c224c1b919d4a413583466067b95418d56e91f92e5e7be98962b4848"} Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.054417 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jhm5\" (UniqueName: \"kubernetes.io/projected/ef1a14b9-d6d5-4d4c-8666-173be56a1538-kube-api-access-8jhm5\") pod \"migrator-59844c95c7-bs4dw\" (UID: \"ef1a14b9-d6d5-4d4c-8666-173be56a1538\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.067394 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.077260 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzvz\" (UniqueName: \"kubernetes.io/projected/61b9ce21-4328-454f-bb40-0c42c3815831-kube-api-access-zvzvz\") pod \"package-server-manager-789f6589d5-cf5k2\" (UID: \"61b9ce21-4328-454f-bb40-0c42c3815831\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.082910 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.083155 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.093522 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kvhtt"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.110436 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.128230 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.131572 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.631556857 +0000 UTC m=+133.784567944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.140018 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.212052 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bxdt"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.217588 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.230838 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p69c7"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.232272 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.232371 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.232632 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.732620641 +0000 UTC m=+133.885631728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: W0321 04:19:28.237893 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaffcaf5_95c2_451c_af81_3472b875d910.slice/crio-96f5604a0f16bbcd9170fca5d26f1d1ea912d65ddb25a5d917440e67ac431bf7 WatchSource:0}: Error finding container 96f5604a0f16bbcd9170fca5d26f1d1ea912d65ddb25a5d917440e67ac431bf7: Status 404 returned error can't find the container with id 96f5604a0f16bbcd9170fca5d26f1d1ea912d65ddb25a5d917440e67ac431bf7 Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.259766 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.278264 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.278405 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:19:28 crc kubenswrapper[4923]: W0321 04:19:28.287016 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf54c874_99d3_4bff_bef0_992bcee74002.slice/crio-44378ecd17a9a5cee0b430b48372a5df54972b88ef1edbafe625a98d98797eab WatchSource:0}: Error finding container 44378ecd17a9a5cee0b430b48372a5df54972b88ef1edbafe625a98d98797eab: Status 404 returned error can't find the container with id 44378ecd17a9a5cee0b430b48372a5df54972b88ef1edbafe625a98d98797eab Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.292428 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.293993 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kfnbv" Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.332770 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.333456 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.833437687 +0000 UTC m=+133.986448774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.333585 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.333849 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.833840889 +0000 UTC m=+133.986851976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.396694 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qgp9s"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.424618 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.435711 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.436135 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:28.936117049 +0000 UTC m=+134.089128136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.460762 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zljcw"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.496851 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.537265 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.537589 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.037577255 +0000 UTC m=+134.190588342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.606625 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.637843 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.637950 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.137927957 +0000 UTC m=+134.290939044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.638064 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.638444 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.138432462 +0000 UTC m=+134.291443549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.730094 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.739575 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.739877 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.239863677 +0000 UTC m=+134.392874764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.758144 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-blnd8"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.841561 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.841931 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.34191114 +0000 UTC m=+134.494922297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.943209 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:28 crc kubenswrapper[4923]: E0321 04:19:28.945543 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.44552711 +0000 UTC m=+134.598538187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.979593 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9czdb"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.982775 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k47v5"] Mar 21 04:19:28 crc kubenswrapper[4923]: I0321 04:19:28.988193 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr"] Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.010455 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xw67l"] Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.046068 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.046463 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.54644954 +0000 UTC m=+134.699460617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.088737 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw"] Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.122229 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" event={"ID":"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d","Type":"ContainerStarted","Data":"62197133d500745148f5c95f6941c62c92b9d16ad3db98763e0f8fdfefde6de5"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.123241 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.138481 4923 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5jpr6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.138545 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.154436 4923 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.154910 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.654884034 +0000 UTC m=+134.807895121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: W0321 04:19:29.181400 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef1a14b9_d6d5_4d4c_8666_173be56a1538.slice/crio-238e9b62bd55667c1de057f2ee3aedb1f504f659dbe88064026bf4ff75043def WatchSource:0}: Error finding container 238e9b62bd55667c1de057f2ee3aedb1f504f659dbe88064026bf4ff75043def: Status 404 returned error can't find the container with id 238e9b62bd55667c1de057f2ee3aedb1f504f659dbe88064026bf4ff75043def Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.209117 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" 
event={"ID":"57f89f97-10d8-45d1-a69f-34d09c5224c9","Type":"ContainerStarted","Data":"27d55e2832515645f022bc67b83e28c32962e59834ea4e8e6b069189cd1e3957"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.212054 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4s8j8"] Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.217093 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" event={"ID":"baaa32c9-702b-4a43-a7b7-7a98272f80f3","Type":"ContainerStarted","Data":"4c82377cb8b9609969c53cd3870b7a28b7f108e527636e95351b685852acf0a5"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.219264 4923 generic.go:334] "Generic (PLEG): container finished" podID="ea29f2c8-3e4b-42b5-8161-af03ae16ed23" containerID="630e30dde6e99568f5ce8e545e8c93cd27ae11b84f064dde45b658cc8372bfa9" exitCode=0 Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.219305 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" event={"ID":"ea29f2c8-3e4b-42b5-8161-af03ae16ed23","Type":"ContainerDied","Data":"630e30dde6e99568f5ce8e545e8c93cd27ae11b84f064dde45b658cc8372bfa9"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.229804 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p69c7" event={"ID":"bba19ac5-eeb6-4536-93c2-22f110e6ce8a","Type":"ContainerStarted","Data":"13a18adbc70e88d4272a6d3c48183569c57390f6373dee922da1ec352c60dee7"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.232858 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"] Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.232885 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2"] Mar 21 04:19:29 crc 
kubenswrapper[4923]: I0321 04:19:29.238010 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" event={"ID":"9e29bd24-b2c3-434d-bdf8-6f05376fb87a","Type":"ContainerStarted","Data":"6f0883f40c4ceab69de811c61f05997e6f016dc354d39035be6ac6358fec0ba9"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.242219 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" event={"ID":"5944eec5-8a3f-4fde-86d1-792b48c0a2bd","Type":"ContainerStarted","Data":"548cf71189b91fce37f529104bd445e5e9413592c84ac65e3f11339f9b57a073"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.242249 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" event={"ID":"5944eec5-8a3f-4fde-86d1-792b48c0a2bd","Type":"ContainerStarted","Data":"164d819b271477670754cc630b7961a8d79e21f9e573a293bd2e83cb36329c16"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.246257 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" event={"ID":"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53","Type":"ContainerStarted","Data":"2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.246750 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.248886 4923 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vwt2s container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.248936 
4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" podUID="7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.251390 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zljcw" event={"ID":"dc511693-2a38-4ca9-bf24-c7f8b7c47972","Type":"ContainerStarted","Data":"76f22ffae06d75ea6be5e032a596f71afdfe88b48509355f98b43e18f1c36173"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.252125 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" event={"ID":"6df34760-baa5-45b8-859e-c9935c6f5656","Type":"ContainerStarted","Data":"a246ef30dddf2514ac77115324241bda7fe62febc6e35b7dc5b908a8a77a8192"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.252993 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" event={"ID":"e2c285ca-61ae-40f2-b349-7028faa03a00","Type":"ContainerStarted","Data":"0198c3ed429445535445824148cc7d69c87e10a0ee99aa20e4f82d2de10e07ac"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.253805 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kfnbv" event={"ID":"927f7151-f378-4783-9da9-3c69886850d9","Type":"ContainerStarted","Data":"3fb08e0e60cf2bbda6150a7e25efddb638cd1fa9874c76d43f16ec4ce0705251"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.255499 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.256841 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" event={"ID":"39dc2e68-1df7-426f-aa50-15a542f6995b","Type":"ContainerStarted","Data":"a51aafffe547fe0a825780c83bc23eaa3baf3f01a18005ef2c6918b7e3786890"} Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.257264 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.757250787 +0000 UTC m=+134.910261874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.258786 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" event={"ID":"93ea7292-1b26-4dc3-a5ba-d9b799c22264","Type":"ContainerStarted","Data":"8e1bf11324331ec5f9d2338af0121e3a6f763bdc3940376982702856ac85c35e"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.261024 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" 
event={"ID":"8ae3d7d8-8466-4519-91c0-48c7230d1388","Type":"ContainerStarted","Data":"3a44326954c716f3680992e4713319d392f4df3e5d0a7b41c89d6f5bfdb8ccd8"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.261770 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" event={"ID":"b717fc18-bfdb-4e99-8fc0-ea7c905dd908","Type":"ContainerStarted","Data":"1528bc9dde3561114603d744e61b80829bdc2500d14be2620ed4d63e90525b91"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.264282 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tbxjk" event={"ID":"01bb7225-8917-44ec-894f-c2e237b1826e","Type":"ContainerStarted","Data":"d6658cc5513ef38daba5446e7c60a1f3a32c788b7558c9c7b10a4daed0e5e176"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.269954 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" event={"ID":"8372e439-9ab9-4ba3-80e8-c6d3959187f7","Type":"ContainerStarted","Data":"18f75b307a4176dfdff944337c84ebb5bd6fd6a2ce7b0c972a5135561b49f88a"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.271179 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" event={"ID":"b56d396c-a7a2-466c-8c2a-973f24a9e3f1","Type":"ContainerStarted","Data":"15e2fc492ceac721e05f3afdc054686f724d891fb9308b69a5ec5e8db7e7e1bd"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.272994 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hqpc" event={"ID":"3f1486ed-5a65-43a1-9a45-c02318d4d831","Type":"ContainerStarted","Data":"c3c6cdb2be4a633ee6540d2ff042ad715b57a55783a639c06ce38adb49197db8"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.273360 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-2hqpc" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.274506 4923 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hqpc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.274556 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hqpc" podUID="3f1486ed-5a65-43a1-9a45-c02318d4d831" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.279671 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" event={"ID":"1921652d-7f9e-4bef-927f-fd616e41f865","Type":"ContainerStarted","Data":"fdc1109737d7c7556522c838a5c2c4e6049d807cf2c1545c42d7bb7cf3bfae3d"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.284463 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" event={"ID":"6d274204-1cbf-4028-8cc7-ae94ad474006","Type":"ContainerStarted","Data":"7fbf7c6d5c3057bc98f21b2167ed422e96ba8922452a50c2bd14db423cd134b6"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.286966 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" event={"ID":"1877b8e8-87db-477a-a619-be2fbdc97d89","Type":"ContainerStarted","Data":"2508f09130d60d1d2f245afbf70ec4052e4fce37673ad374bc0ed45c350a9a84"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.288807 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" 
event={"ID":"8fbfb476-3201-4f1b-b6a1-9b2b858d37d4","Type":"ContainerStarted","Data":"17e2b054adb43bda421ef9e324c66b35afc77296232f67a1d0a11a3852f6b607"} Mar 21 04:19:29 crc kubenswrapper[4923]: W0321 04:19:29.293514 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b9ce21_4328_454f_bb40_0c42c3815831.slice/crio-0f45d136c9246a659b39ac684d6353acd268d46d16a5765d862b8af4004851d2 WatchSource:0}: Error finding container 0f45d136c9246a659b39ac684d6353acd268d46d16a5765d862b8af4004851d2: Status 404 returned error can't find the container with id 0f45d136c9246a659b39ac684d6353acd268d46d16a5765d862b8af4004851d2 Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.297300 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" event={"ID":"20a37f55-0799-4e21-88d8-f50c736f01ef","Type":"ContainerStarted","Data":"d95a4d6739c4c4fbafc57b3d3afbf6af21fa51339033088c7b2820f89ab1dbf0"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.298544 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" event={"ID":"972f1013-d1cb-44f2-b79d-baacfef4e939","Type":"ContainerStarted","Data":"60d04e12c95cfd06675ee40dc829a5e4909e10541566fd6c55de0072050d7c10"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.300692 4923 generic.go:334] "Generic (PLEG): container finished" podID="bd68bc5e-ac31-4f87-8e40-d7e4d7696106" containerID="d5354dda4702d654cd20c8aa162db2fec303200c1759fe10d80a2ff4689c7c71" exitCode=0 Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.300746 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" event={"ID":"bd68bc5e-ac31-4f87-8e40-d7e4d7696106","Type":"ContainerDied","Data":"d5354dda4702d654cd20c8aa162db2fec303200c1759fe10d80a2ff4689c7c71"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 
04:19:29.309295 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k47v5" event={"ID":"c71bf24a-b3c6-410d-9fc2-13ef571982c1","Type":"ContainerStarted","Data":"37a6a354d356fbc65c0aa389f06f37443762e83c827ad5108bfc95c46f0119fc"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.311784 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" event={"ID":"cf54c874-99d3-4bff-bef0-992bcee74002","Type":"ContainerStarted","Data":"44378ecd17a9a5cee0b430b48372a5df54972b88ef1edbafe625a98d98797eab"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.313110 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" event={"ID":"3764589a-2735-4154-bda2-c4e462e62202","Type":"ContainerStarted","Data":"15ac43e2a280bef2202ee0cc9088d3d1ea2d0cc83ee994cb921d8fd63928044d"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.319907 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" event={"ID":"eaffcaf5-95c2-451c-af81-3472b875d910","Type":"ContainerStarted","Data":"96f5604a0f16bbcd9170fca5d26f1d1ea912d65ddb25a5d917440e67ac431bf7"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.321260 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" event={"ID":"7b214e59-6ee8-4ac4-8753-76ab84691b78","Type":"ContainerStarted","Data":"2907137f5dae4e6f51b097e24e7f3381d3b03b74ae7cfafdc6fd680cbbd465c9"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.325950 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" 
event={"ID":"09b6d403-50e8-4b92-aa1f-173ace636bca","Type":"ContainerStarted","Data":"4a97c3e698d85ef45f91cb7291a0654b2d40e3e8eb91235f86e87a3dff9fc11e"} Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.339277 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4jzdb"] Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.341738 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp"] Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.356446 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.356691 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.856661941 +0000 UTC m=+135.009673038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.356774 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.357131 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.857117125 +0000 UTC m=+135.010128212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.428579 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" podStartSLOduration=62.428559602 podStartE2EDuration="1m2.428559602s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:29.427687746 +0000 UTC m=+134.580698843" watchObservedRunningTime="2026-03-21 04:19:29.428559602 +0000 UTC m=+134.581570689" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.459309 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.459780 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:29.959754496 +0000 UTC m=+135.112765623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.560862 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.561793 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.061780528 +0000 UTC m=+135.214791615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.661308 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.661629 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.161615055 +0000 UTC m=+135.314626142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.732727 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-p69c7" podStartSLOduration=62.732710072 podStartE2EDuration="1m2.732710072s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:29.732351052 +0000 UTC m=+134.885362149" watchObservedRunningTime="2026-03-21 04:19:29.732710072 +0000 UTC m=+134.885721159" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.768507 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.768833 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.268820253 +0000 UTC m=+135.421831340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.869601 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-22lp8" podStartSLOduration=62.869584877 podStartE2EDuration="1m2.869584877s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:29.807110458 +0000 UTC m=+134.960121545" watchObservedRunningTime="2026-03-21 04:19:29.869584877 +0000 UTC m=+135.022595954" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.869725 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.870255 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.370236917 +0000 UTC m=+135.523248004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.870260 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" podStartSLOduration=62.870256788 podStartE2EDuration="1m2.870256788s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:29.8689976 +0000 UTC m=+135.022008687" watchObservedRunningTime="2026-03-21 04:19:29.870256788 +0000 UTC m=+135.023267875" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.971288 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:29 crc kubenswrapper[4923]: E0321 04:19:29.971883 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.471869408 +0000 UTC m=+135.624880495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.986838 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-94ccs" podStartSLOduration=63.986819005 podStartE2EDuration="1m3.986819005s" podCreationTimestamp="2026-03-21 04:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:29.928879842 +0000 UTC m=+135.081890929" watchObservedRunningTime="2026-03-21 04:19:29.986819005 +0000 UTC m=+135.139830092" Mar 21 04:19:29 crc kubenswrapper[4923]: I0321 04:19:29.994342 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" podStartSLOduration=63.994304649 podStartE2EDuration="1m3.994304649s" podCreationTimestamp="2026-03-21 04:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:29.992575267 +0000 UTC m=+135.145586354" watchObservedRunningTime="2026-03-21 04:19:29.994304649 +0000 UTC m=+135.147315736" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.015192 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kkp72" podStartSLOduration=64.015176614 podStartE2EDuration="1m4.015176614s" podCreationTimestamp="2026-03-21 04:18:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.014292547 +0000 UTC m=+135.167303634" watchObservedRunningTime="2026-03-21 04:19:30.015176614 +0000 UTC m=+135.168187701" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.079786 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.080409 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.580387575 +0000 UTC m=+135.733398662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.141233 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mnss5" podStartSLOduration=63.141211784 podStartE2EDuration="1m3.141211784s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.139423291 +0000 UTC m=+135.292434408" watchObservedRunningTime="2026-03-21 04:19:30.141211784 +0000 UTC m=+135.294222871" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.187457 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.187837 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.687825439 +0000 UTC m=+135.840836526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.294384 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.294863 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.794842381 +0000 UTC m=+135.947853468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.328166 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wfhqh" podStartSLOduration=63.328146337 podStartE2EDuration="1m3.328146337s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.296606964 +0000 UTC m=+135.449618051" watchObservedRunningTime="2026-03-21 04:19:30.328146337 +0000 UTC m=+135.481157424" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.341825 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" event={"ID":"e45e0807-99b9-42b2-b20c-f383d864fb2d","Type":"ContainerStarted","Data":"487aa13436369d245e5235628c0684078e2c254239ab30abfa54d601fb83cd19"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.341864 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" event={"ID":"e45e0807-99b9-42b2-b20c-f383d864fb2d","Type":"ContainerStarted","Data":"0d5e25f20ee0a0fac7b6414933d5b4c8f6a4169e420c80bb81518a02cf1efdac"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.347032 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" 
event={"ID":"20a37f55-0799-4e21-88d8-f50c736f01ef","Type":"ContainerStarted","Data":"bb2a69e09e6fee6dff47c2e9b26b796cbb0e2ed5adea67fa344386aa1bd86b1a"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.352849 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" event={"ID":"57f89f97-10d8-45d1-a69f-34d09c5224c9","Type":"ContainerStarted","Data":"9e25d3dcb74e447adabe1a8ef68561148473885fb2cd29d457c90543019c91bc"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.354572 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p69c7" event={"ID":"bba19ac5-eeb6-4536-93c2-22f110e6ce8a","Type":"ContainerStarted","Data":"90b7238dee1fd8b4d9b33d8a1696c85089d754ae5f3f1b7f60d53bb505a6df32"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.356054 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" event={"ID":"e790bca6-2cf0-4c92-bba3-e3d609ea17a0","Type":"ContainerStarted","Data":"427dcbe7b28b05c4810b009579ebf330b19a433acf8873b8b3c975941ddeff6b"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.356080 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" event={"ID":"e790bca6-2cf0-4c92-bba3-e3d609ea17a0","Type":"ContainerStarted","Data":"f6cf9fefc269c46a789e017af0f98d98bdb002c82d86a2b0330b2c79098254e6"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.366051 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" event={"ID":"1877b8e8-87db-477a-a619-be2fbdc97d89","Type":"ContainerStarted","Data":"b68d352afdb847b800df4907dfb5ef37004cbefa051b166cd7109db496c30d38"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.366101 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" event={"ID":"cf54c874-99d3-4bff-bef0-992bcee74002","Type":"ContainerStarted","Data":"725db3d6cce82fd8fe4c59107fda0dfd7541b9e8450534b4e50431be83f51408"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.366115 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" event={"ID":"cf54c874-99d3-4bff-bef0-992bcee74002","Type":"ContainerStarted","Data":"9d7435629a74881d4586cdbfc78cff11eb2231bb7f091291fc6cc4e567cc95cd"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.369430 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" event={"ID":"b717fc18-bfdb-4e99-8fc0-ea7c905dd908","Type":"ContainerStarted","Data":"ba07bece2567240ec8839146b3df4b0bbe589617fa9bfdf6e666c968feec1bc1"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.372255 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k47v5" event={"ID":"c71bf24a-b3c6-410d-9fc2-13ef571982c1","Type":"ContainerStarted","Data":"1e3b317fe8031a95205a358d96c856a47827a3fc823695600d4afd6a33158989"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.373848 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" event={"ID":"09b6d403-50e8-4b92-aa1f-173ace636bca","Type":"ContainerStarted","Data":"5f8890c1a32cc039da7772d9745b251f3fe94e8d943246d04fca33c0c869002e"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.384941 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" event={"ID":"b5571552-4369-46f6-ad29-a54b1f4a7a8f","Type":"ContainerStarted","Data":"8eb49a9905b5302add568be2b29af46d9447fe2fe3b084cf2f4b91e4f713d451"} Mar 21 04:19:30 crc 
kubenswrapper[4923]: I0321 04:19:30.384991 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" event={"ID":"b5571552-4369-46f6-ad29-a54b1f4a7a8f","Type":"ContainerStarted","Data":"bcc63874000bcea309252ae93fee7f3445a767490b4a457a703c4fb4f9b7cd8d"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.387330 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" event={"ID":"8372e439-9ab9-4ba3-80e8-c6d3959187f7","Type":"ContainerStarted","Data":"7dda4216b27f082244914b2a15d3f2150f0e5a727c7b97a60f973c333c60f670"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.389841 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kfnbv" event={"ID":"927f7151-f378-4783-9da9-3c69886850d9","Type":"ContainerStarted","Data":"f112a641401206ecfb651fa25d3ede1b47de65e561d5291627bda714200c8c82"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.391351 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" event={"ID":"3764589a-2735-4154-bda2-c4e462e62202","Type":"ContainerStarted","Data":"510e05caf96266ff5b0b7528365c61491764eb3a393d1d318b186798c48c83d4"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.393664 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tbxjk" event={"ID":"01bb7225-8917-44ec-894f-c2e237b1826e","Type":"ContainerStarted","Data":"952096141f3020e483ad7940525081e74cc10a2a0371e69a1141a485b5a51f7f"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.395268 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" 
event={"ID":"ea29f2c8-3e4b-42b5-8161-af03ae16ed23","Type":"ContainerStarted","Data":"2ba06148a7f46d827885f7e4ed6e70759cca146708334eed17056b64323d0203"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.395940 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.397359 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:30.897340608 +0000 UTC m=+136.050351695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.401871 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" event={"ID":"61b9ce21-4328-454f-bb40-0c42c3815831","Type":"ContainerStarted","Data":"5fe94811678a248b26371d0e3c34e3b8e0a468f0ac8ba007f711e64ccf94c91e"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.401924 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" 
event={"ID":"61b9ce21-4328-454f-bb40-0c42c3815831","Type":"ContainerStarted","Data":"29f6eda1a4da909492e1b173fbe74b5b27bec86251416d32373968d6ff5b8cb4"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.401936 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" event={"ID":"61b9ce21-4328-454f-bb40-0c42c3815831","Type":"ContainerStarted","Data":"0f45d136c9246a659b39ac684d6353acd268d46d16a5765d862b8af4004851d2"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.402664 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.414983 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" event={"ID":"ef1a14b9-d6d5-4d4c-8666-173be56a1538","Type":"ContainerStarted","Data":"b4f21a8745a9d7a44aa49699e63d7066a1e97595b2306394ed5b64e9f8dc2e15"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.415028 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" event={"ID":"ef1a14b9-d6d5-4d4c-8666-173be56a1538","Type":"ContainerStarted","Data":"cb659caedad794a657bb7dd1c634a3307210e66ffde2a9cbf530ebcc747d05ff"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.415041 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" event={"ID":"ef1a14b9-d6d5-4d4c-8666-173be56a1538","Type":"ContainerStarted","Data":"238e9b62bd55667c1de057f2ee3aedb1f504f659dbe88064026bf4ff75043def"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.421518 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" 
event={"ID":"c5f8d415-601f-44f6-a5be-50c0e5c23826","Type":"ContainerStarted","Data":"e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.421551 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" event={"ID":"c5f8d415-601f-44f6-a5be-50c0e5c23826","Type":"ContainerStarted","Data":"070d01393bc388b2fb2837d9ad544b71e7379529f8cb952b9c51d34e46559d0a"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.427098 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.430662 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" event={"ID":"960489fc-897c-40a7-a0be-be20b3b81ff5","Type":"ContainerStarted","Data":"94f7f554801f0ce81d962a36421d35d1c9f109d0888a41bad2a07f5831a2ebf2"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.432596 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zljcw" event={"ID":"dc511693-2a38-4ca9-bf24-c7f8b7c47972","Type":"ContainerStarted","Data":"3c4e0cff72757825b56070e375aaf621a0f5925b6a6472340262f1f0aec7572d"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.433301 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.434480 4923 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c67dl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Mar 21 04:19:30 crc 
kubenswrapper[4923]: I0321 04:19:30.434514 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" podUID="c5f8d415-601f-44f6-a5be-50c0e5c23826" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.439368 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" event={"ID":"a338651b-7efe-4160-84a4-30471aadc1b7","Type":"ContainerStarted","Data":"08abff472d9a80726f12e7d2f17fe62bca4d134f0428c4ffcfa47ff1e583ff13"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.439400 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" event={"ID":"a338651b-7efe-4160-84a4-30471aadc1b7","Type":"ContainerStarted","Data":"9d32dc50e54f8bbe2f9c95f68d2ef0ac98a397e3230577b01c478a62baba1824"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.440130 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.442464 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" event={"ID":"6d274204-1cbf-4028-8cc7-ae94ad474006","Type":"ContainerStarted","Data":"e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.443005 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.451564 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9czdb" 
event={"ID":"dfd46889-1472-4303-9b88-bd0b374fc3ac","Type":"ContainerStarted","Data":"ea13f00e68a6dbb59cca2b5d6c8acd85902086ba71ecdb221a10c7b48a8bc990"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.451626 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9czdb" event={"ID":"dfd46889-1472-4303-9b88-bd0b374fc3ac","Type":"ContainerStarted","Data":"f9923917a865b5f4ba162f478d018b0b63c0842a8be7e7023835383564e087cb"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.453574 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" event={"ID":"972f1013-d1cb-44f2-b79d-baacfef4e939","Type":"ContainerStarted","Data":"34e8ec86e7d3990ca7fa82645cbf321c28b9ddf0682202d17097f2f109f08c2d"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.454449 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.456571 4923 generic.go:334] "Generic (PLEG): container finished" podID="6df34760-baa5-45b8-859e-c9935c6f5656" containerID="6699e63fa9f0a392b5f592d1334583a8aa829cd7e396abc340f44e46a28a0cec" exitCode=0 Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.456971 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" event={"ID":"6df34760-baa5-45b8-859e-c9935c6f5656","Type":"ContainerDied","Data":"6699e63fa9f0a392b5f592d1334583a8aa829cd7e396abc340f44e46a28a0cec"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.465568 4923 patch_prober.go:28] interesting pod/console-operator-58897d9998-zljcw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 21 04:19:30 crc 
kubenswrapper[4923]: I0321 04:19:30.465612 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zljcw" podUID="dc511693-2a38-4ca9-bf24-c7f8b7c47972" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.470684 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" event={"ID":"7b214e59-6ee8-4ac4-8753-76ab84691b78","Type":"ContainerStarted","Data":"77074e3edf32bddd112808461334844af8901aee37c5853d9b54324cf33120b3"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.472092 4923 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qv8h5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.472099 4923 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j26hr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.472129 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" podUID="972f1013-d1cb-44f2-b79d-baacfef4e939" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.472150 4923 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" podUID="a338651b-7efe-4160-84a4-30471aadc1b7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.487675 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" event={"ID":"b56d396c-a7a2-466c-8c2a-973f24a9e3f1","Type":"ContainerStarted","Data":"c74f99ed70a9cb22e2b1559d5e6d1e30c93c87c9d6a6af37e6dc05eb09223da8"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.487925 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.490224 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" event={"ID":"e2c285ca-61ae-40f2-b349-7028faa03a00","Type":"ContainerStarted","Data":"4f3448415db330e40febd126a9844e266a9d0bb87fecc967172cd60206ad25a9"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.493254 4923 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-k2q79 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.493293 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" podUID="b56d396c-a7a2-466c-8c2a-973f24a9e3f1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 
04:19:30.494332 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4lqw2" event={"ID":"93ea7292-1b26-4dc3-a5ba-d9b799c22264","Type":"ContainerStarted","Data":"a1ebaa99c893d65bbc5c5c5da15c6779925ebd3f7ee645333e7b3d993c53e186"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.498451 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.500541 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.000521445 +0000 UTC m=+136.153532532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.502361 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2hqpc" podStartSLOduration=63.502346699 podStartE2EDuration="1m3.502346699s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.448270841 +0000 UTC m=+135.601281928" watchObservedRunningTime="2026-03-21 04:19:30.502346699 +0000 UTC m=+135.655357786" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.511295 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" event={"ID":"eaffcaf5-95c2-451c-af81-3472b875d910","Type":"ContainerStarted","Data":"61cba09b5bbe9be6242eeb9bd1b63aa79cbc7339a45925ff7d62483144e6ed49"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.545955 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.547635 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" event={"ID":"39dc2e68-1df7-426f-aa50-15a542f6995b","Type":"ContainerStarted","Data":"eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c"} Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.547681 4923 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.548458 4923 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hqpc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.548531 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hqpc" podUID="3f1486ed-5a65-43a1-9a45-c02318d4d831" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.552863 4923 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5jpr6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.552922 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.567402 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.567957 4923 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8bxdt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.569444 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" podUID="39dc2e68-1df7-426f-aa50-15a542f6995b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.583209 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q2rkk" podStartSLOduration=63.583189907 podStartE2EDuration="1m3.583189907s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.548652744 +0000 UTC m=+135.701663831" watchObservedRunningTime="2026-03-21 04:19:30.583189907 +0000 UTC m=+135.736200984" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.601519 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.603825 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.103812534 +0000 UTC m=+136.256823621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.619029 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" podStartSLOduration=6.619013929 podStartE2EDuration="6.619013929s" podCreationTimestamp="2026-03-21 04:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.612384961 +0000 UTC m=+135.765396048" watchObservedRunningTime="2026-03-21 04:19:30.619013929 +0000 UTC m=+135.772025016" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.623377 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dpq9j" podStartSLOduration=63.623362959 podStartE2EDuration="1m3.623362959s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.581925489 +0000 UTC m=+135.734936576" watchObservedRunningTime="2026-03-21 04:19:30.623362959 +0000 UTC m=+135.776374046" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.658200 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-k47v5" podStartSLOduration=6.658181681 podStartE2EDuration="6.658181681s" podCreationTimestamp="2026-03-21 04:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.657626924 +0000 UTC m=+135.810638011" watchObservedRunningTime="2026-03-21 04:19:30.658181681 +0000 UTC m=+135.811192768" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.703193 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.704680 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.204649231 +0000 UTC m=+136.357660318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.713475 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4s8j8" podStartSLOduration=63.713454965 podStartE2EDuration="1m3.713454965s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.711258819 +0000 UTC m=+135.864269906" watchObservedRunningTime="2026-03-21 04:19:30.713454965 +0000 UTC m=+135.866466052" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.747580 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" podStartSLOduration=63.747552385 podStartE2EDuration="1m3.747552385s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.745290957 +0000 UTC m=+135.898302044" watchObservedRunningTime="2026-03-21 04:19:30.747552385 +0000 UTC m=+135.900563472" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.788055 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" podStartSLOduration=63.788042016 podStartE2EDuration="1m3.788042016s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.785950974 +0000 UTC m=+135.938962061" watchObservedRunningTime="2026-03-21 04:19:30.788042016 +0000 UTC m=+135.941053103" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.805536 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.805986 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.305970443 +0000 UTC m=+136.458981530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.809712 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" podStartSLOduration=63.809696894 podStartE2EDuration="1m3.809696894s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.808675193 +0000 UTC m=+135.961686280" watchObservedRunningTime="2026-03-21 04:19:30.809696894 +0000 UTC m=+135.962707981" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.902186 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kfnbv" podStartSLOduration=6.90215419 podStartE2EDuration="6.90215419s" podCreationTimestamp="2026-03-21 04:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.872767491 +0000 UTC m=+136.025778568" watchObservedRunningTime="2026-03-21 04:19:30.90215419 +0000 UTC m=+136.055165277" Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.906837 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.907140 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.407118129 +0000 UTC m=+136.560129216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.907228 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:30 crc kubenswrapper[4923]: E0321 04:19:30.907603 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.407591853 +0000 UTC m=+136.560602940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:30 crc kubenswrapper[4923]: I0321 04:19:30.926802 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qb6fp" podStartSLOduration=63.926787387 podStartE2EDuration="1m3.926787387s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.924306693 +0000 UTC m=+136.077317780" watchObservedRunningTime="2026-03-21 04:19:30.926787387 +0000 UTC m=+136.079798474" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.007838 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.008010 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.507983437 +0000 UTC m=+136.660994574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.016262 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tbxjk" podStartSLOduration=64.016243384 podStartE2EDuration="1m4.016243384s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:30.985403841 +0000 UTC m=+136.138414918" watchObservedRunningTime="2026-03-21 04:19:31.016243384 +0000 UTC m=+136.169254471" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.048975 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vsxrf" podStartSLOduration=64.048959113 podStartE2EDuration="1m4.048959113s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.048421366 +0000 UTC m=+136.201432453" watchObservedRunningTime="2026-03-21 04:19:31.048959113 +0000 UTC m=+136.201970200" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.068719 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.070588 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk 
container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.070632 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.104513 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" podStartSLOduration=64.104496944 podStartE2EDuration="1m4.104496944s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.100940378 +0000 UTC m=+136.253951465" watchObservedRunningTime="2026-03-21 04:19:31.104496944 +0000 UTC m=+136.257508031" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.109782 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.110092 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.610079831 +0000 UTC m=+136.763090918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.183671 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zljcw" podStartSLOduration=64.183648742 podStartE2EDuration="1m4.183648742s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.181864599 +0000 UTC m=+136.334875686" watchObservedRunningTime="2026-03-21 04:19:31.183648742 +0000 UTC m=+136.336659839" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.211284 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.211442 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.711417513 +0000 UTC m=+136.864428600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.211598 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.211903 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.711895238 +0000 UTC m=+136.864906325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.226220 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qm9sh" podStartSLOduration=65.226201455 podStartE2EDuration="1m5.226201455s" podCreationTimestamp="2026-03-21 04:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.225625698 +0000 UTC m=+136.378636785" watchObservedRunningTime="2026-03-21 04:19:31.226201455 +0000 UTC m=+136.379212542" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.226798 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" podStartSLOduration=64.226778693 podStartE2EDuration="1m4.226778693s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.20160074 +0000 UTC m=+136.354611837" watchObservedRunningTime="2026-03-21 04:19:31.226778693 +0000 UTC m=+136.379789780" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.259509 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" podStartSLOduration=64.259494722 podStartE2EDuration="1m4.259494722s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.25610164 +0000 UTC m=+136.409112727" watchObservedRunningTime="2026-03-21 04:19:31.259494722 +0000 UTC m=+136.412505809" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.287371 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bs4dw" podStartSLOduration=64.287349395 podStartE2EDuration="1m4.287349395s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.284709166 +0000 UTC m=+136.437720273" watchObservedRunningTime="2026-03-21 04:19:31.287349395 +0000 UTC m=+136.440360492" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.313007 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.313225 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.813199699 +0000 UTC m=+136.966210786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.313375 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.313707 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.813695883 +0000 UTC m=+136.966706960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.317870 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6dbvj" podStartSLOduration=64.317855608 podStartE2EDuration="1m4.317855608s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.315728654 +0000 UTC m=+136.468739751" watchObservedRunningTime="2026-03-21 04:19:31.317855608 +0000 UTC m=+136.470866695" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.358735 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" podStartSLOduration=64.35871668 podStartE2EDuration="1m4.35871668s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.357327089 +0000 UTC m=+136.510338176" watchObservedRunningTime="2026-03-21 04:19:31.35871668 +0000 UTC m=+136.511727767" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.414695 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.414858 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.914831939 +0000 UTC m=+137.067843026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.414953 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.415336 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:31.915307193 +0000 UTC m=+137.068318280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.416187 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rwpdp" podStartSLOduration=64.416147369 podStartE2EDuration="1m4.416147369s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.414896061 +0000 UTC m=+136.567907158" watchObservedRunningTime="2026-03-21 04:19:31.416147369 +0000 UTC m=+136.569158456" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.417680 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xw67l" podStartSLOduration=64.417673414 podStartE2EDuration="1m4.417673414s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.374997527 +0000 UTC m=+136.528008614" watchObservedRunningTime="2026-03-21 04:19:31.417673414 +0000 UTC m=+136.570684501" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.459793 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-68fmb" podStartSLOduration=64.459771264 podStartE2EDuration="1m4.459771264s" podCreationTimestamp="2026-03-21 04:18:27 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.453163766 +0000 UTC m=+136.606174853" watchObservedRunningTime="2026-03-21 04:19:31.459771264 +0000 UTC m=+136.612782351" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.505707 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8gdjq" podStartSLOduration=64.505693308 podStartE2EDuration="1m4.505693308s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.503794361 +0000 UTC m=+136.656805448" watchObservedRunningTime="2026-03-21 04:19:31.505693308 +0000 UTC m=+136.658704395" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.515765 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.516084 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.016069608 +0000 UTC m=+137.169080695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.567257 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" event={"ID":"6df34760-baa5-45b8-859e-c9935c6f5656","Type":"ContainerStarted","Data":"23099788e39ea8bf5435f6ed898136f1b49d7a1762bc6a7f41b1dac6aea40a50"} Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.567382 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.572886 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" event={"ID":"bd68bc5e-ac31-4f87-8e40-d7e4d7696106","Type":"ContainerStarted","Data":"6facfa52a22b32ec818a40ba84fce0ee82f55d76607d6bf8201daaa63ec1782b"} Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.572933 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" event={"ID":"bd68bc5e-ac31-4f87-8e40-d7e4d7696106","Type":"ContainerStarted","Data":"d0481b3215e33fc9c6ffbfcc743dec57c71d1d23781ef7d01de3613205a7fa21"} Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.578611 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" event={"ID":"960489fc-897c-40a7-a0be-be20b3b81ff5","Type":"ContainerStarted","Data":"4ca9e46b7696ae8274f270155fd8c9f4efb7e53c8efc32f62319f5ff0ce9d22f"} Mar 21 
04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.578666 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" event={"ID":"960489fc-897c-40a7-a0be-be20b3b81ff5","Type":"ContainerStarted","Data":"1da154c0d23b3888469aed9cf311668f79c0aae1758673cfc4e456c10578197b"} Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.582887 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9czdb" event={"ID":"dfd46889-1472-4303-9b88-bd0b374fc3ac","Type":"ContainerStarted","Data":"9c93cf0435dd3d3fec7a53f99f047e050bd39e4e0976d348a30bcc5709471623"} Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.583258 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.588813 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" event={"ID":"8ae3d7d8-8466-4519-91c0-48c7230d1388","Type":"ContainerStarted","Data":"e583ccac918c27c560e455f402a4d7f4378e1833d720b269ec78239789b7502f"} Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.592737 4923 patch_prober.go:28] interesting pod/console-operator-58897d9998-zljcw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.592767 4923 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qv8h5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.592815 4923 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8bxdt 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.592855 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" podUID="39dc2e68-1df7-426f-aa50-15a542f6995b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.592773 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zljcw" podUID="dc511693-2a38-4ca9-bf24-c7f8b7c47972" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.595271 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kvhtt" podStartSLOduration=64.595254837 podStartE2EDuration="1m4.595254837s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.593219057 +0000 UTC m=+136.746230144" watchObservedRunningTime="2026-03-21 04:19:31.595254837 +0000 UTC m=+136.748265924" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.596361 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" podUID="972f1013-d1cb-44f2-b79d-baacfef4e939" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 21 
04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.592768 4923 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-k2q79 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.596422 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" podUID="b56d396c-a7a2-466c-8c2a-973f24a9e3f1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.592773 4923 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-j26hr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.596448 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" podUID="a338651b-7efe-4160-84a4-30471aadc1b7" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.596729 4923 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c67dl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.596777 4923 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" podUID="c5f8d415-601f-44f6-a5be-50c0e5c23826" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.598137 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-s5spg" podStartSLOduration=64.598126153 podStartE2EDuration="1m4.598126153s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.556474757 +0000 UTC m=+136.709485844" watchObservedRunningTime="2026-03-21 04:19:31.598126153 +0000 UTC m=+136.751137240" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.617613 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.618434 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.11841995 +0000 UTC m=+137.271431037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.663703 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" podStartSLOduration=64.663688125 podStartE2EDuration="1m4.663688125s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.655009275 +0000 UTC m=+136.808020352" watchObservedRunningTime="2026-03-21 04:19:31.663688125 +0000 UTC m=+136.816699212" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.718251 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.719923 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.219907627 +0000 UTC m=+137.372918714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.786514 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4jzdb" podStartSLOduration=64.786498779 podStartE2EDuration="1m4.786498779s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.786218841 +0000 UTC m=+136.939229918" watchObservedRunningTime="2026-03-21 04:19:31.786498779 +0000 UTC m=+136.939509866" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.820342 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.820663 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.320651631 +0000 UTC m=+137.473662718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.844278 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9czdb" podStartSLOduration=7.844265658 podStartE2EDuration="7.844265658s" podCreationTimestamp="2026-03-21 04:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.841926028 +0000 UTC m=+136.994937115" watchObservedRunningTime="2026-03-21 04:19:31.844265658 +0000 UTC m=+136.997276745" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.898545 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" podStartSLOduration=64.898527581 podStartE2EDuration="1m4.898527581s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.897219642 +0000 UTC m=+137.050230729" watchObservedRunningTime="2026-03-21 04:19:31.898527581 +0000 UTC m=+137.051538668" Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.921744 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 
04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.921946 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.421920371 +0000 UTC m=+137.574931458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.922044 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:31 crc kubenswrapper[4923]: E0321 04:19:31.922358 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.422346934 +0000 UTC m=+137.575358021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:31 crc kubenswrapper[4923]: I0321 04:19:31.944955 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" podStartSLOduration=64.94493473 podStartE2EDuration="1m4.94493473s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:31.933372954 +0000 UTC m=+137.086384041" watchObservedRunningTime="2026-03-21 04:19:31.94493473 +0000 UTC m=+137.097945817" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.023914 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.024026 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.524008515 +0000 UTC m=+137.677019602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.024243 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.024516 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.52450918 +0000 UTC m=+137.677520267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.077278 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:32 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:32 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:32 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.077341 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.116724 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.116782 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.125581 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.125894 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.625879173 +0000 UTC m=+137.778890260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.154808 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38402: no serving certificate available for the kubelet" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.226701 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.227226 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.727065431 +0000 UTC m=+137.880076518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.244035 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38408: no serving certificate available for the kubelet" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.265516 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l24r9"] Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.277645 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.277845 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.279261 4923 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5bhvd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.279315 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" podUID="bd68bc5e-ac31-4f87-8e40-d7e4d7696106" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.328234 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.328650 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.82863239 +0000 UTC m=+137.981643477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.346152 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38416: no serving certificate available for the kubelet" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.429906 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.430213 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:32.930201589 +0000 UTC m=+138.083212676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.471584 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38420: no serving certificate available for the kubelet" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.530457 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.530779 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.030755237 +0000 UTC m=+138.183766324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.557160 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38436: no serving certificate available for the kubelet" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.594835 4923 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5jpr6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.594876 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.605385 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.631731 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.632805 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.13278325 +0000 UTC m=+138.285794437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.703726 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38448: no serving certificate available for the kubelet" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.733100 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.734455 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.234427561 +0000 UTC m=+138.387438648 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.811327 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38458: no serving certificate available for the kubelet" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.834775 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.835134 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.335118233 +0000 UTC m=+138.488129320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.841523 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-j26hr" Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.936211 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.936397 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.436371543 +0000 UTC m=+138.589382630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.936488 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:32 crc kubenswrapper[4923]: E0321 04:19:32.936749 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.436736174 +0000 UTC m=+138.589747261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:32 crc kubenswrapper[4923]: I0321 04:19:32.993682 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38462: no serving certificate available for the kubelet" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.037898 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.038061 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.538023204 +0000 UTC m=+138.691034291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.038255 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.038559 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.53854681 +0000 UTC m=+138.691557897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.072397 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:33 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:33 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:33 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.072457 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.139614 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.139809 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 04:19:33.639785369 +0000 UTC m=+138.792796456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.139888 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.140204 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.640191691 +0000 UTC m=+138.793202778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.240736 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.240921 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.740896024 +0000 UTC m=+138.893907101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.240964 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.241288 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.741281286 +0000 UTC m=+138.894292373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.302620 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.341919 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.342097 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.842071641 +0000 UTC m=+138.995082728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.342303 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.342613 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.842605667 +0000 UTC m=+138.995616754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.442816 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.442989 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.94296606 +0000 UTC m=+139.095977147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.443229 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.443646 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:33.94363793 +0000 UTC m=+139.096649017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.544692 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.544854 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.044830618 +0000 UTC m=+139.197841695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.545028 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.545355 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.045315182 +0000 UTC m=+139.198326269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.596228 4923 patch_prober.go:28] interesting pod/console-operator-58897d9998-zljcw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.596279 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zljcw" podUID="dc511693-2a38-4ca9-bf24-c7f8b7c47972" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.599591 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" event={"ID":"8ae3d7d8-8466-4519-91c0-48c7230d1388","Type":"ContainerStarted","Data":"ea3ffdc1a76db3d8c286dacbbc41ac353c54140b6ebbe8fec374118cc56ca6a4"} Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.599650 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" event={"ID":"8ae3d7d8-8466-4519-91c0-48c7230d1388","Type":"ContainerStarted","Data":"ed8628f5c59b456e63abaf8e0beb4289cbb7150139eb07e192619bf45598972c"} Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.599765 4923 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" gracePeriod=30 Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.646737 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.648544 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.14852421 +0000 UTC m=+139.301535297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.729402 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38478: no serving certificate available for the kubelet" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.748715 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.749108 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.249091779 +0000 UTC m=+139.402102866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.788288 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vwt2s"] Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.788474 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" podUID="7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" containerName="controller-manager" containerID="cri-o://2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2" gracePeriod=30 Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.828460 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hq8bw"] Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.829431 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.831045 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.840544 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq8bw"] Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.849302 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.849717 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.349701359 +0000 UTC m=+139.502712446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.854163 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"] Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.881271 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.950524 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-utilities\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.950598 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.950654 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-catalog-content\") pod 
\"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:33 crc kubenswrapper[4923]: I0321 04:19:33.950674 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbhj\" (UniqueName: \"kubernetes.io/projected/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-kube-api-access-8zbhj\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:33 crc kubenswrapper[4923]: E0321 04:19:33.950969 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.450951208 +0000 UTC m=+139.603962385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.003961 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9grdl"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.005070 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.007361 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.019403 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9grdl"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.051938 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.052140 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-utilities\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.052199 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-catalog-content\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.052219 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zbhj\" (UniqueName: \"kubernetes.io/projected/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-kube-api-access-8zbhj\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " 
pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.052267 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-catalog-content\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.052293 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-utilities\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.052312 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z6bp\" (UniqueName: \"kubernetes.io/projected/dd250302-91b0-41c4-b138-89559a78d375-kube-api-access-4z6bp\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: E0321 04:19:34.052439 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.552424864 +0000 UTC m=+139.705435951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.052762 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-catalog-content\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.053195 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-utilities\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.071139 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zbhj\" (UniqueName: \"kubernetes.io/projected/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-kube-api-access-8zbhj\") pod \"certified-operators-hq8bw\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.072639 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:34 crc kubenswrapper[4923]: [-]has-synced failed: reason 
withheld Mar 21 04:19:34 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:34 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.072680 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.087336 4923 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.140714 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.153491 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z6bp\" (UniqueName: \"kubernetes.io/projected/dd250302-91b0-41c4-b138-89559a78d375-kube-api-access-4z6bp\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.153547 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.153576 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-utilities\") pod 
\"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.153673 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-catalog-content\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.154078 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-catalog-content\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: E0321 04:19:34.154340 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.654308042 +0000 UTC m=+139.807319129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.154957 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-utilities\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.187349 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z6bp\" (UniqueName: \"kubernetes.io/projected/dd250302-91b0-41c4-b138-89559a78d375-kube-api-access-4z6bp\") pod \"community-operators-9grdl\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.207058 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m57bq"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.210500 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.211346 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m57bq"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.237114 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.255888 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.256287 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-catalog-content\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.256337 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-utilities\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.256394 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lxb5\" (UniqueName: \"kubernetes.io/projected/e8d52ff5-e444-4d7d-952f-6d95888a7791-kube-api-access-4lxb5\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: E0321 04:19:34.256573 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.756556541 +0000 UTC m=+139.909567628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.318699 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.360725 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-proxy-ca-bundles\") pod \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.360779 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-serving-cert\") pod \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.360860 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxhst\" (UniqueName: \"kubernetes.io/projected/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-kube-api-access-fxhst\") pod \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.360885 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-config\") pod \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.360938 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-client-ca\") pod \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\" (UID: \"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.361064 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lxb5\" (UniqueName: \"kubernetes.io/projected/e8d52ff5-e444-4d7d-952f-6d95888a7791-kube-api-access-4lxb5\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.361153 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.361179 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-catalog-content\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.361201 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-utilities\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: E0321 04:19:34.361701 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-21 04:19:34.861686236 +0000 UTC m=+140.014697323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bmnqq" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.362152 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" (UID: "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.362163 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" (UID: "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.362642 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-config" (OuterVolumeSpecName: "config") pod "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" (UID: "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.362968 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-catalog-content\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.363043 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-utilities\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.369138 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-kube-api-access-fxhst" (OuterVolumeSpecName: "kube-api-access-fxhst") pod "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" (UID: "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53"). InnerVolumeSpecName "kube-api-access-fxhst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.370908 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" (UID: "7d3be6cc-22e0-4a96-9fc2-aed5fe092a53"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.393821 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lxb5\" (UniqueName: \"kubernetes.io/projected/e8d52ff5-e444-4d7d-952f-6d95888a7791-kube-api-access-4lxb5\") pod \"certified-operators-m57bq\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.396519 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t2p5d"] Mar 21 04:19:34 crc kubenswrapper[4923]: E0321 04:19:34.396708 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" containerName="controller-manager" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.396719 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" containerName="controller-manager" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.396800 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" containerName="controller-manager" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.397373 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.417372 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2p5d"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.439098 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-blnd8" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.457620 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hq8bw"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.463747 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.463893 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-catalog-content\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.463951 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpt67\" (UniqueName: \"kubernetes.io/projected/d0e9b7e6-79e7-47f0-a488-182df8bb166e-kube-api-access-hpt67\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.463967 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-utilities\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.464062 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.464074 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.464081 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.464090 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxhst\" (UniqueName: \"kubernetes.io/projected/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-kube-api-access-fxhst\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.464098 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:34 crc kubenswrapper[4923]: E0321 04:19:34.464164 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-21 04:19:34.964150462 +0000 UTC m=+140.117161549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 21 04:19:34 crc kubenswrapper[4923]: W0321 04:19:34.470556 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd7868c_7d3e_43de_b0f9_1ab51280fce5.slice/crio-a28fb4fd097cbd182866a26619d80ac9101811f1f0fa01c9335fe758b4fee0ec WatchSource:0}: Error finding container a28fb4fd097cbd182866a26619d80ac9101811f1f0fa01c9335fe758b4fee0ec: Status 404 returned error can't find the container with id a28fb4fd097cbd182866a26619d80ac9101811f1f0fa01c9335fe758b4fee0ec Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.522307 4923 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-21T04:19:34.08735959Z","Handler":null,"Name":""} Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.527584 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.529653 4923 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.529685 4923 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.567240 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpt67\" (UniqueName: \"kubernetes.io/projected/d0e9b7e6-79e7-47f0-a488-182df8bb166e-kube-api-access-hpt67\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.567282 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-utilities\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.567430 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.567462 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-catalog-content\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.567932 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-catalog-content\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.568601 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-utilities\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.585678 4923 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.585719 4923 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.594287 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpt67\" (UniqueName: \"kubernetes.io/projected/d0e9b7e6-79e7-47f0-a488-182df8bb166e-kube-api-access-hpt67\") pod \"community-operators-t2p5d\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") " pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.613193 4923 generic.go:334] "Generic (PLEG): container finished" podID="7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" containerID="2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2" exitCode=0 Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.613339 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.613477 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" event={"ID":"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53","Type":"ContainerDied","Data":"2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2"} Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.613525 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vwt2s" event={"ID":"7d3be6cc-22e0-4a96-9fc2-aed5fe092a53","Type":"ContainerDied","Data":"539dc637fb1a1ebe82ed3b0c0d96e90058ff912e433bd6dde3823845c3b18f9e"} Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.613546 4923 scope.go:117] "RemoveContainer" containerID="2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.625991 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" event={"ID":"8ae3d7d8-8466-4519-91c0-48c7230d1388","Type":"ContainerStarted","Data":"07bf009612b7ba48460e0a384e4dcb5d26568fa44fe9ea93cd5068f24264c578"} Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.626895 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9grdl"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.628446 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq8bw" event={"ID":"3dd7868c-7d3e-43de-b0f9-1ab51280fce5","Type":"ContainerStarted","Data":"a28fb4fd097cbd182866a26619d80ac9101811f1f0fa01c9335fe758b4fee0ec"} Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.628555 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" 
podUID="c5f8d415-601f-44f6-a5be-50c0e5c23826" containerName="route-controller-manager" containerID="cri-o://e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29" gracePeriod=30 Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.632628 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vwt2s"] Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.635440 4923 scope.go:117] "RemoveContainer" containerID="2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.636198 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vwt2s"] Mar 21 04:19:34 crc kubenswrapper[4923]: E0321 04:19:34.637535 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2\": container with ID starting with 2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2 not found: ID does not exist" containerID="2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.637581 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2"} err="failed to get container status \"2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2\": rpc error: code = NotFound desc = could not find container \"2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2\": container with ID starting with 2c9a3967bbc4f3f29dcab9b6fb3f1cae8e338cbcb4730b2c92f83a0c4f210ae2 not found: ID does not exist" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.638328 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kn7nz" 
Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.651593 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bmnqq\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.667895 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.670162 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qgp9s" podStartSLOduration=10.670150776 podStartE2EDuration="10.670150776s" podCreationTimestamp="2026-03-21 04:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:34.663831696 +0000 UTC m=+139.816842783" watchObservedRunningTime="2026-03-21 04:19:34.670150776 +0000 UTC m=+139.823161863" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.681613 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.721074 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t2p5d" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.798286 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m57bq"] Mar 21 04:19:34 crc kubenswrapper[4923]: W0321 04:19:34.804122 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d52ff5_e444_4d7d_952f_6d95888a7791.slice/crio-61688def02468bb744888e24c27d3c6c835fa8d21087524a0debc82fdcaf5407 WatchSource:0}: Error finding container 61688def02468bb744888e24c27d3c6c835fa8d21087524a0debc82fdcaf5407: Status 404 returned error can't find the container with id 61688def02468bb744888e24c27d3c6c835fa8d21087524a0debc82fdcaf5407 Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.905651 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:34 crc kubenswrapper[4923]: I0321 04:19:34.952798 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2p5d"] Mar 21 04:19:34 crc kubenswrapper[4923]: W0321 04:19:34.964218 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e9b7e6_79e7_47f0_a488_182df8bb166e.slice/crio-65eb9c6c00a9a7afb7a1f6331700c86053349040e7f3be99a9cd901e4cca1c3f WatchSource:0}: Error finding container 65eb9c6c00a9a7afb7a1f6331700c86053349040e7f3be99a9cd901e4cca1c3f: Status 404 returned error can't find the container with id 65eb9c6c00a9a7afb7a1f6331700c86053349040e7f3be99a9cd901e4cca1c3f Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.054536 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38484: no serving certificate available for the kubelet" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.073666 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:35 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:35 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:35 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.073726 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.119544 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw"] Mar 
21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.120390 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.123962 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.124047 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.124226 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.124292 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.124425 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.128890 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.134770 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw"] Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.135736 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.166518 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bmnqq"] Mar 21 04:19:35 crc kubenswrapper[4923]: W0321 04:19:35.173916 4923 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2327f6c_aeb7_4fa3_bfe2_4a27dda3bad8.slice/crio-dc60f6ca6b6c2c569fb18fb3fef50a3bf0c0abebe699a49d89f7be74d3c01cb6 WatchSource:0}: Error finding container dc60f6ca6b6c2c569fb18fb3fef50a3bf0c0abebe699a49d89f7be74d3c01cb6: Status 404 returned error can't find the container with id dc60f6ca6b6c2c569fb18fb3fef50a3bf0c0abebe699a49d89f7be74d3c01cb6 Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.178909 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjmv\" (UniqueName: \"kubernetes.io/projected/877384d0-10ed-4fca-a600-e8fa69a85648-kube-api-access-6fjmv\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.179086 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-config\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.179208 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-client-ca\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.179358 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-proxy-ca-bundles\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.179481 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.179613 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877384d0-10ed-4fca-a600-e8fa69a85648-serving-cert\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.194186 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c807a2c9-347b-412f-ae48-0a1d03fefa10-metrics-certs\") pod \"network-metrics-daemon-rxwzv\" (UID: \"c807a2c9-347b-412f-ae48-0a1d03fefa10\") " pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.281128 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-client-ca\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.281192 4923 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-proxy-ca-bundles\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.281264 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877384d0-10ed-4fca-a600-e8fa69a85648-serving-cert\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.281338 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fjmv\" (UniqueName: \"kubernetes.io/projected/877384d0-10ed-4fca-a600-e8fa69a85648-kube-api-access-6fjmv\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.281383 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-config\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.282174 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-client-ca\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc 
kubenswrapper[4923]: I0321 04:19:35.282267 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-proxy-ca-bundles\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.283059 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-config\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.293172 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rxwzv" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.293548 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877384d0-10ed-4fca-a600-e8fa69a85648-serving-cert\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.295820 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fjmv\" (UniqueName: \"kubernetes.io/projected/877384d0-10ed-4fca-a600-e8fa69a85648-kube-api-access-6fjmv\") pod \"controller-manager-86b7cf8fdb-k4qsw\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.440117 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.612278 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.645342 4923 generic.go:334] "Generic (PLEG): container finished" podID="dd250302-91b0-41c4-b138-89559a78d375" containerID="afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b" exitCode=0 Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.645424 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grdl" event={"ID":"dd250302-91b0-41c4-b138-89559a78d375","Type":"ContainerDied","Data":"afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.645465 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grdl" event={"ID":"dd250302-91b0-41c4-b138-89559a78d375","Type":"ContainerStarted","Data":"b07fecc856dcd3606b8e2e69a289615b35387edc534f536fce52145ed6546fb9"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.648080 4923 generic.go:334] "Generic (PLEG): container finished" podID="c5f8d415-601f-44f6-a5be-50c0e5c23826" containerID="e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29" exitCode=0 Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.648174 4923 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.648185 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.648194 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" event={"ID":"c5f8d415-601f-44f6-a5be-50c0e5c23826","Type":"ContainerDied","Data":"e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.648853 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl" event={"ID":"c5f8d415-601f-44f6-a5be-50c0e5c23826","Type":"ContainerDied","Data":"070d01393bc388b2fb2837d9ad544b71e7379529f8cb952b9c51d34e46559d0a"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.648887 4923 scope.go:117] "RemoveContainer" containerID="e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.655985 4923 generic.go:334] "Generic (PLEG): container finished" podID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerID="2055db5135fab7f76bfea1af811a0c8c593abf8278c2515782087ad1667eb621" exitCode=0 Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.656070 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2p5d" event={"ID":"d0e9b7e6-79e7-47f0-a488-182df8bb166e","Type":"ContainerDied","Data":"2055db5135fab7f76bfea1af811a0c8c593abf8278c2515782087ad1667eb621"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.656096 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2p5d" event={"ID":"d0e9b7e6-79e7-47f0-a488-182df8bb166e","Type":"ContainerStarted","Data":"65eb9c6c00a9a7afb7a1f6331700c86053349040e7f3be99a9cd901e4cca1c3f"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.671534 4923 generic.go:334] "Generic (PLEG): container 
finished" podID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerID="4b1a63022d3397bf3ed5e71f93af80fc215147df45e10a535460038e255f543b" exitCode=0 Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.671611 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m57bq" event={"ID":"e8d52ff5-e444-4d7d-952f-6d95888a7791","Type":"ContainerDied","Data":"4b1a63022d3397bf3ed5e71f93af80fc215147df45e10a535460038e255f543b"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.671636 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m57bq" event={"ID":"e8d52ff5-e444-4d7d-952f-6d95888a7791","Type":"ContainerStarted","Data":"61688def02468bb744888e24c27d3c6c835fa8d21087524a0debc82fdcaf5407"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.682115 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" event={"ID":"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8","Type":"ContainerStarted","Data":"9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.682160 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" event={"ID":"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8","Type":"ContainerStarted","Data":"dc60f6ca6b6c2c569fb18fb3fef50a3bf0c0abebe699a49d89f7be74d3c01cb6"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.682207 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.685714 4923 scope.go:117] "RemoveContainer" containerID="e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.685930 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-client-ca\") pod \"c5f8d415-601f-44f6-a5be-50c0e5c23826\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.685968 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f8d415-601f-44f6-a5be-50c0e5c23826-serving-cert\") pod \"c5f8d415-601f-44f6-a5be-50c0e5c23826\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.686058 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt2p7\" (UniqueName: \"kubernetes.io/projected/c5f8d415-601f-44f6-a5be-50c0e5c23826-kube-api-access-jt2p7\") pod \"c5f8d415-601f-44f6-a5be-50c0e5c23826\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.686127 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-config\") pod \"c5f8d415-601f-44f6-a5be-50c0e5c23826\" (UID: \"c5f8d415-601f-44f6-a5be-50c0e5c23826\") " Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.688448 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-config" (OuterVolumeSpecName: "config") pod "c5f8d415-601f-44f6-a5be-50c0e5c23826" (UID: "c5f8d415-601f-44f6-a5be-50c0e5c23826"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:19:35 crc kubenswrapper[4923]: E0321 04:19:35.689034 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29\": container with ID starting with e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29 not found: ID does not exist" containerID="e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.689077 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29"} err="failed to get container status \"e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29\": rpc error: code = NotFound desc = could not find container \"e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29\": container with ID starting with e6b9c6e9605807d038ed288afad866722fb7d17e9a60bcc501aca51530497e29 not found: ID does not exist" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.689559 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-client-ca" (OuterVolumeSpecName: "client-ca") pod "c5f8d415-601f-44f6-a5be-50c0e5c23826" (UID: "c5f8d415-601f-44f6-a5be-50c0e5c23826"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.689688 4923 generic.go:334] "Generic (PLEG): container finished" podID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerID="6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d" exitCode=0 Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.689810 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq8bw" event={"ID":"3dd7868c-7d3e-43de-b0f9-1ab51280fce5","Type":"ContainerDied","Data":"6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d"} Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.698421 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f8d415-601f-44f6-a5be-50c0e5c23826-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f8d415-601f-44f6-a5be-50c0e5c23826" (UID: "c5f8d415-601f-44f6-a5be-50c0e5c23826"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.704360 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f8d415-601f-44f6-a5be-50c0e5c23826-kube-api-access-jt2p7" (OuterVolumeSpecName: "kube-api-access-jt2p7") pod "c5f8d415-601f-44f6-a5be-50c0e5c23826" (UID: "c5f8d415-601f-44f6-a5be-50c0e5c23826"). InnerVolumeSpecName "kube-api-access-jt2p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.714933 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rxwzv"] Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.756834 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" podStartSLOduration=68.756814638 podStartE2EDuration="1m8.756814638s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:35.739249282 +0000 UTC m=+140.892260379" watchObservedRunningTime="2026-03-21 04:19:35.756814638 +0000 UTC m=+140.909825715" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.784008 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5twmg"] Mar 21 04:19:35 crc kubenswrapper[4923]: E0321 04:19:35.784227 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f8d415-601f-44f6-a5be-50c0e5c23826" containerName="route-controller-manager" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.784239 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f8d415-601f-44f6-a5be-50c0e5c23826" containerName="route-controller-manager" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.784856 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f8d415-601f-44f6-a5be-50c0e5c23826" containerName="route-controller-manager" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.785524 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.787465 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.789264 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.789296 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f8d415-601f-44f6-a5be-50c0e5c23826-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.789488 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt2p7\" (UniqueName: \"kubernetes.io/projected/c5f8d415-601f-44f6-a5be-50c0e5c23826-kube-api-access-jt2p7\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.789502 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f8d415-601f-44f6-a5be-50c0e5c23826-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.798241 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5twmg"] Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.876688 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw"] Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.890700 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-catalog-content\") pod \"redhat-marketplace-5twmg\" 
(UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.890762 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k86bc\" (UniqueName: \"kubernetes.io/projected/3b1d4e5a-6b46-4203-ba69-6440844e48ad-kube-api-access-k86bc\") pod \"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.890822 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-utilities\") pod \"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.993932 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-catalog-content\") pod \"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.994068 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k86bc\" (UniqueName: \"kubernetes.io/projected/3b1d4e5a-6b46-4203-ba69-6440844e48ad-kube-api-access-k86bc\") pod \"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.994178 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-utilities\") pod 
\"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.994518 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-catalog-content\") pod \"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:35 crc kubenswrapper[4923]: I0321 04:19:35.994977 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-utilities\") pod \"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.018015 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k86bc\" (UniqueName: \"kubernetes.io/projected/3b1d4e5a-6b46-4203-ba69-6440844e48ad-kube-api-access-k86bc\") pod \"redhat-marketplace-5twmg\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.078711 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.080210 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:36 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:36 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:36 crc kubenswrapper[4923]: healthz 
check failed Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.080302 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.089920 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c67dl"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.094276 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.095089 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.098262 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.098606 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.101412 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.102619 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.187191 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hgbdf"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.188353 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.196677 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.196808 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.206841 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgbdf"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.298883 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-utilities\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.298955 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-catalog-content\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.298979 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.299033 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqmh\" (UniqueName: \"kubernetes.io/projected/85f7800b-eaa8-45bd-95b4-ee4885cadf52-kube-api-access-4zqmh\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.299054 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.299111 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.329841 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.388803 4923 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7d3be6cc-22e0-4a96-9fc2-aed5fe092a53" path="/var/lib/kubelet/pods/7d3be6cc-22e0-4a96-9fc2-aed5fe092a53/volumes" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.389586 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.390128 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f8d415-601f-44f6-a5be-50c0e5c23826" path="/var/lib/kubelet/pods/c5f8d415-601f-44f6-a5be-50c0e5c23826/volumes" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.400924 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-catalog-content\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.401316 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-catalog-content\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.401450 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zqmh\" (UniqueName: \"kubernetes.io/projected/85f7800b-eaa8-45bd-95b4-ee4885cadf52-kube-api-access-4zqmh\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.401504 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-utilities\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.401749 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-utilities\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.416717 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.431583 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zqmh\" (UniqueName: \"kubernetes.io/projected/85f7800b-eaa8-45bd-95b4-ee4885cadf52-kube-api-access-4zqmh\") pod \"redhat-marketplace-hgbdf\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") " pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.508673 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.632194 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5twmg"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.752748 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" event={"ID":"877384d0-10ed-4fca-a600-e8fa69a85648","Type":"ContainerStarted","Data":"f00f458b15e7059fe70a9624cd81e2727ee2c4d05a2359db981b734c316117e5"} Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.752811 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" event={"ID":"877384d0-10ed-4fca-a600-e8fa69a85648","Type":"ContainerStarted","Data":"b882137c5a69cec2f2f95566c76913259d102883bf8cd17dd977c2455ad24c0d"} Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.753276 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.768600 4923 generic.go:334] "Generic (PLEG): container finished" podID="b717fc18-bfdb-4e99-8fc0-ea7c905dd908" containerID="ba07bece2567240ec8839146b3df4b0bbe589617fa9bfdf6e666c968feec1bc1" exitCode=0 Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.768744 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" event={"ID":"b717fc18-bfdb-4e99-8fc0-ea7c905dd908","Type":"ContainerDied","Data":"ba07bece2567240ec8839146b3df4b0bbe589617fa9bfdf6e666c968feec1bc1"} Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.775420 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 
04:19:36.790198 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rxwzv" event={"ID":"c807a2c9-347b-412f-ae48-0a1d03fefa10","Type":"ContainerStarted","Data":"db5d779a9697e5a6838fdbcc60b8cdfd0d7174fb870d19b2df5ac882a04bf87e"} Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.790238 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rxwzv" event={"ID":"c807a2c9-347b-412f-ae48-0a1d03fefa10","Type":"ContainerStarted","Data":"fae3f8bd19d3bbb5c605492396dc736bcec83ca3d0be4036621fc8663516a11e"} Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.790249 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rxwzv" event={"ID":"c807a2c9-347b-412f-ae48-0a1d03fefa10","Type":"ContainerStarted","Data":"79bb410e0dae281060d40830563cd4f550f4909978e528b879418f29dde57177"} Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.805666 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" podStartSLOduration=1.805650028 podStartE2EDuration="1.805650028s" podCreationTimestamp="2026-03-21 04:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:36.784589558 +0000 UTC m=+141.937600645" watchObservedRunningTime="2026-03-21 04:19:36.805650028 +0000 UTC m=+141.958661115" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.854309 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rxwzv" podStartSLOduration=69.854292524 podStartE2EDuration="1m9.854292524s" podCreationTimestamp="2026-03-21 04:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:36.852425878 +0000 UTC 
m=+142.005436965" watchObservedRunningTime="2026-03-21 04:19:36.854292524 +0000 UTC m=+142.007303611" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.871523 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.883814 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.884949 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.888735 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.889154 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.889401 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.889599 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.889751 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.889947 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.895104 4923 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.895829 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.898452 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.898637 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.944777 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz"] Mar 21 04:19:36 crc kubenswrapper[4923]: I0321 04:19:36.961338 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.026935 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-client-ca\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.027089 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-serving-cert\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.027165 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxkjc\" (UniqueName: \"kubernetes.io/projected/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-kube-api-access-qxkjc\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.027191 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-config\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.077063 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:37 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:37 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:37 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.077546 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.128083 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-serving-cert\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") 
" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.128127 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a53f400-4470-4aef-8ff2-33331220a390-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.128164 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxkjc\" (UniqueName: \"kubernetes.io/projected/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-kube-api-access-qxkjc\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.128185 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-config\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.128226 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-client-ca\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.128286 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3a53f400-4470-4aef-8ff2-33331220a390-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.129034 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-client-ca\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.130463 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-config\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.135575 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-serving-cert\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.160122 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxkjc\" (UniqueName: \"kubernetes.io/projected/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-kube-api-access-qxkjc\") pod \"route-controller-manager-6b6c5b7fb5-7jfzz\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.199296 4923 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lcp5h"] Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.200792 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.207521 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.214493 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgbdf"] Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.214539 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lcp5h"] Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.230076 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a53f400-4470-4aef-8ff2-33331220a390-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.234445 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a53f400-4470-4aef-8ff2-33331220a390-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.234591 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a53f400-4470-4aef-8ff2-33331220a390-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:37 crc 
kubenswrapper[4923]: I0321 04:19:37.245398 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.271248 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a53f400-4470-4aef-8ff2-33331220a390-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:37 crc kubenswrapper[4923]: W0321 04:19:37.278453 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85f7800b_eaa8_45bd_95b4_ee4885cadf52.slice/crio-530eae2f61f161bca3218ff62027a80d8e83e9e8d3a64e5916ac4f8d9017cea5 WatchSource:0}: Error finding container 530eae2f61f161bca3218ff62027a80d8e83e9e8d3a64e5916ac4f8d9017cea5: Status 404 returned error can't find the container with id 530eae2f61f161bca3218ff62027a80d8e83e9e8d3a64e5916ac4f8d9017cea5 Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.288213 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.298475 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5bhvd" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.318246 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.336136 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsd4\" (UniqueName: \"kubernetes.io/projected/ded4d513-cc92-405c-8009-913f9aa7ea5f-kube-api-access-zvsd4\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.336186 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-catalog-content\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.336241 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-utilities\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.380257 4923 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hqpc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.380616 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hqpc" podUID="3f1486ed-5a65-43a1-9a45-c02318d4d831" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection 
refused" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.380392 4923 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hqpc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.380867 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hqpc" podUID="3f1486ed-5a65-43a1-9a45-c02318d4d831" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.398419 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.439907 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsd4\" (UniqueName: \"kubernetes.io/projected/ded4d513-cc92-405c-8009-913f9aa7ea5f-kube-api-access-zvsd4\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.439971 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-catalog-content\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.440049 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-utilities\") pod \"redhat-operators-lcp5h\" (UID: 
\"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.441832 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-catalog-content\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.442605 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-utilities\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.482395 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsd4\" (UniqueName: \"kubernetes.io/projected/ded4d513-cc92-405c-8009-913f9aa7ea5f-kube-api-access-zvsd4\") pod \"redhat-operators-lcp5h\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.519596 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.523255 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qv8h5" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.546537 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.557715 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.557747 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.567944 4923 patch_prober.go:28] interesting pod/console-f9d7485db-p69c7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.567986 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p69c7" podUID="bba19ac5-eeb6-4536-93c2-22f110e6ce8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.602792 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-k2q79" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.606041 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v7tdq"] Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.608315 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.620302 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v7tdq"] Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.665732 4923 ???:1] "http: TLS handshake error from 192.168.126.11:38494: no serving certificate available for the kubelet" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.745291 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-catalog-content\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.745721 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j8wc\" (UniqueName: \"kubernetes.io/projected/44513839-d352-4720-a552-bc11c6030391-kube-api-access-6j8wc\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.745746 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-utilities\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.781225 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 21 04:19:37 crc kubenswrapper[4923]: W0321 04:19:37.813108 4923 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod3a53f400_4470_4aef_8ff2_33331220a390.slice/crio-a5f7a12626173a2ae8d958ac665e8a6b1402ed44089eeb3cbd6e8d8117fd3dd0 WatchSource:0}: Error finding container a5f7a12626173a2ae8d958ac665e8a6b1402ed44089eeb3cbd6e8d8117fd3dd0: Status 404 returned error can't find the container with id a5f7a12626173a2ae8d958ac665e8a6b1402ed44089eeb3cbd6e8d8117fd3dd0 Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.818992 4923 generic.go:334] "Generic (PLEG): container finished" podID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerID="da55dd6610e5bbbc4340d264c50c5066c84dbe6d1a1c4de5449c7a1505212792" exitCode=0 Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.819053 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgbdf" event={"ID":"85f7800b-eaa8-45bd-95b4-ee4885cadf52","Type":"ContainerDied","Data":"da55dd6610e5bbbc4340d264c50c5066c84dbe6d1a1c4de5449c7a1505212792"} Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.819118 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgbdf" event={"ID":"85f7800b-eaa8-45bd-95b4-ee4885cadf52","Type":"ContainerStarted","Data":"530eae2f61f161bca3218ff62027a80d8e83e9e8d3a64e5916ac4f8d9017cea5"} Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.845632 4923 generic.go:334] "Generic (PLEG): container finished" podID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerID="82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2" exitCode=0 Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.845729 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5twmg" event={"ID":"3b1d4e5a-6b46-4203-ba69-6440844e48ad","Type":"ContainerDied","Data":"82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2"} Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.845758 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5twmg" event={"ID":"3b1d4e5a-6b46-4203-ba69-6440844e48ad","Type":"ContainerStarted","Data":"9e64ad2a3528079f52d762c3aed972254347ceaaf0f9c36c4391a196d37f54a5"} Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.848661 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j8wc\" (UniqueName: \"kubernetes.io/projected/44513839-d352-4720-a552-bc11c6030391-kube-api-access-6j8wc\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.848700 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-utilities\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.848773 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-catalog-content\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.855429 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a","Type":"ContainerStarted","Data":"a94fc7362ec80656c887a912fdeba923e2a507b20dfdb48aae32d5fd868ae536"} Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.856531 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-catalog-content\") pod \"redhat-operators-v7tdq\" (UID: 
\"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.856762 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-utilities\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.875724 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j8wc\" (UniqueName: \"kubernetes.io/projected/44513839-d352-4720-a552-bc11c6030391-kube-api-access-6j8wc\") pod \"redhat-operators-v7tdq\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") " pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.908264 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zljcw" Mar 21 04:19:37 crc kubenswrapper[4923]: I0321 04:19:37.945179 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v7tdq" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.018684 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz"] Mar 21 04:19:38 crc kubenswrapper[4923]: E0321 04:19:38.028371 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:38 crc kubenswrapper[4923]: E0321 04:19:38.030431 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:38 crc kubenswrapper[4923]: E0321 04:19:38.032202 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:38 crc kubenswrapper[4923]: E0321 04:19:38.032273 4923 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" Mar 21 04:19:38 crc kubenswrapper[4923]: W0321 04:19:38.044491 4923 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice/crio-87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8 WatchSource:0}: Error finding container 87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8: Status 404 returned error can't find the container with id 87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8 Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.068944 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.078209 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:38 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:38 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:38 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.078281 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.216880 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.248553 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lcp5h"] Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.291861 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v7tdq"] Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.356399 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwg8p\" (UniqueName: \"kubernetes.io/projected/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-kube-api-access-zwg8p\") pod \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.356498 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume\") pod \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.356592 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-secret-volume\") pod \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\" (UID: \"b717fc18-bfdb-4e99-8fc0-ea7c905dd908\") " Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.369720 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume" (OuterVolumeSpecName: "config-volume") pod "b717fc18-bfdb-4e99-8fc0-ea7c905dd908" (UID: "b717fc18-bfdb-4e99-8fc0-ea7c905dd908"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.372173 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b717fc18-bfdb-4e99-8fc0-ea7c905dd908" (UID: "b717fc18-bfdb-4e99-8fc0-ea7c905dd908"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.372666 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-kube-api-access-zwg8p" (OuterVolumeSpecName: "kube-api-access-zwg8p") pod "b717fc18-bfdb-4e99-8fc0-ea7c905dd908" (UID: "b717fc18-bfdb-4e99-8fc0-ea7c905dd908"). InnerVolumeSpecName "kube-api-access-zwg8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.461004 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwg8p\" (UniqueName: \"kubernetes.io/projected/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-kube-api-access-zwg8p\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.461031 4923 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.461040 4923 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b717fc18-bfdb-4e99-8fc0-ea7c905dd908-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.881174 4923 generic.go:334] "Generic (PLEG): container finished" podID="44513839-d352-4720-a552-bc11c6030391" 
containerID="9a5dcb0b681fb55dc6a3dfd385784602c8151eaad32048242b377ef4ceb5844e" exitCode=0 Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.881253 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7tdq" event={"ID":"44513839-d352-4720-a552-bc11c6030391","Type":"ContainerDied","Data":"9a5dcb0b681fb55dc6a3dfd385784602c8151eaad32048242b377ef4ceb5844e"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.881298 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7tdq" event={"ID":"44513839-d352-4720-a552-bc11c6030391","Type":"ContainerStarted","Data":"7f89a8eed38723e994c831923b0f527a6bf50c59c3cadd34e4284f4019de20be"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.882911 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" event={"ID":"b717fc18-bfdb-4e99-8fc0-ea7c905dd908","Type":"ContainerDied","Data":"1528bc9dde3561114603d744e61b80829bdc2500d14be2620ed4d63e90525b91"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.882931 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1528bc9dde3561114603d744e61b80829bdc2500d14be2620ed4d63e90525b91" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.882980 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567775-vxpk6" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.894484 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" event={"ID":"a4b3225e-19a4-40c0-b9fe-b93566ed64e9","Type":"ContainerStarted","Data":"7935c95971714504f5c3af5f486f9b77ae8c83d72793850f565aa187d2b3971e"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.894520 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" event={"ID":"a4b3225e-19a4-40c0-b9fe-b93566ed64e9","Type":"ContainerStarted","Data":"87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.895450 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.902353 4923 generic.go:334] "Generic (PLEG): container finished" podID="3e14e6aa-f23a-40f0-a8a3-10ee545ce37a" containerID="8bd766ca5ca75550c6f1eb64f232da5c81b150f438e246c845bb233d5b32d6c3" exitCode=0 Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.902604 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a","Type":"ContainerDied","Data":"8bd766ca5ca75550c6f1eb64f232da5c81b150f438e246c845bb233d5b32d6c3"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.906449 4923 generic.go:334] "Generic (PLEG): container finished" podID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerID="753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd" exitCode=0 Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.906527 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-lcp5h" event={"ID":"ded4d513-cc92-405c-8009-913f9aa7ea5f","Type":"ContainerDied","Data":"753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.906545 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcp5h" event={"ID":"ded4d513-cc92-405c-8009-913f9aa7ea5f","Type":"ContainerStarted","Data":"db39b272dc807a9978c9652e86064a47413271258a3c3da147e31dc565414414"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.913210 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3a53f400-4470-4aef-8ff2-33331220a390","Type":"ContainerStarted","Data":"db716043cfb3a110a3219697ecac6efc72d58861ca59e3f9b66ef4fdc85206ef"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.913231 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3a53f400-4470-4aef-8ff2-33331220a390","Type":"ContainerStarted","Data":"a5f7a12626173a2ae8d958ac665e8a6b1402ed44089eeb3cbd6e8d8117fd3dd0"} Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.918652 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" podStartSLOduration=3.918642197 podStartE2EDuration="3.918642197s" podCreationTimestamp="2026-03-21 04:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:38.916849003 +0000 UTC m=+144.069860090" watchObservedRunningTime="2026-03-21 04:19:38.918642197 +0000 UTC m=+144.071653284" Mar 21 04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.942475 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 
04:19:38 crc kubenswrapper[4923]: I0321 04:19:38.977404 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.977384484 podStartE2EDuration="2.977384484s" podCreationTimestamp="2026-03-21 04:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:19:38.965293483 +0000 UTC m=+144.118304570" watchObservedRunningTime="2026-03-21 04:19:38.977384484 +0000 UTC m=+144.130395571" Mar 21 04:19:39 crc kubenswrapper[4923]: I0321 04:19:39.074200 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:39 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:39 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:39 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:39 crc kubenswrapper[4923]: I0321 04:19:39.074244 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:39 crc kubenswrapper[4923]: I0321 04:19:39.933708 4923 generic.go:334] "Generic (PLEG): container finished" podID="3a53f400-4470-4aef-8ff2-33331220a390" containerID="db716043cfb3a110a3219697ecac6efc72d58861ca59e3f9b66ef4fdc85206ef" exitCode=0 Mar 21 04:19:39 crc kubenswrapper[4923]: I0321 04:19:39.933851 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3a53f400-4470-4aef-8ff2-33331220a390","Type":"ContainerDied","Data":"db716043cfb3a110a3219697ecac6efc72d58861ca59e3f9b66ef4fdc85206ef"} Mar 21 04:19:40 crc 
kubenswrapper[4923]: I0321 04:19:40.033922 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9czdb" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.072531 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:40 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:40 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:40 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.072601 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.293494 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.393717 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kubelet-dir\") pod \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.393804 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kube-api-access\") pod \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\" (UID: \"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a\") " Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.394899 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3e14e6aa-f23a-40f0-a8a3-10ee545ce37a" (UID: "3e14e6aa-f23a-40f0-a8a3-10ee545ce37a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.400160 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3e14e6aa-f23a-40f0-a8a3-10ee545ce37a" (UID: "3e14e6aa-f23a-40f0-a8a3-10ee545ce37a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.496280 4923 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.496315 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e14e6aa-f23a-40f0-a8a3-10ee545ce37a-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.951685 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.951767 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3e14e6aa-f23a-40f0-a8a3-10ee545ce37a","Type":"ContainerDied","Data":"a94fc7362ec80656c887a912fdeba923e2a507b20dfdb48aae32d5fd868ae536"} Mar 21 04:19:40 crc kubenswrapper[4923]: I0321 04:19:40.951798 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94fc7362ec80656c887a912fdeba923e2a507b20dfdb48aae32d5fd868ae536" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.087776 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:41 crc kubenswrapper[4923]: [-]has-synced failed: reason withheld Mar 21 04:19:41 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:41 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.087823 4923 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.254306 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.408732 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a53f400-4470-4aef-8ff2-33331220a390-kubelet-dir\") pod \"3a53f400-4470-4aef-8ff2-33331220a390\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.408816 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a53f400-4470-4aef-8ff2-33331220a390-kube-api-access\") pod \"3a53f400-4470-4aef-8ff2-33331220a390\" (UID: \"3a53f400-4470-4aef-8ff2-33331220a390\") " Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.409380 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a53f400-4470-4aef-8ff2-33331220a390-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3a53f400-4470-4aef-8ff2-33331220a390" (UID: "3a53f400-4470-4aef-8ff2-33331220a390"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.413838 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a53f400-4470-4aef-8ff2-33331220a390-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3a53f400-4470-4aef-8ff2-33331220a390" (UID: "3a53f400-4470-4aef-8ff2-33331220a390"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.510124 4923 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a53f400-4470-4aef-8ff2-33331220a390-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.510141 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a53f400-4470-4aef-8ff2-33331220a390-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.980590 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3a53f400-4470-4aef-8ff2-33331220a390","Type":"ContainerDied","Data":"a5f7a12626173a2ae8d958ac665e8a6b1402ed44089eeb3cbd6e8d8117fd3dd0"} Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.980624 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f7a12626173a2ae8d958ac665e8a6b1402ed44089eeb3cbd6e8d8117fd3dd0" Mar 21 04:19:41 crc kubenswrapper[4923]: I0321 04:19:41.980680 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 21 04:19:42 crc kubenswrapper[4923]: I0321 04:19:42.071937 4923 patch_prober.go:28] interesting pod/router-default-5444994796-tbxjk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 21 04:19:42 crc kubenswrapper[4923]: [+]has-synced ok Mar 21 04:19:42 crc kubenswrapper[4923]: [+]process-running ok Mar 21 04:19:42 crc kubenswrapper[4923]: healthz check failed Mar 21 04:19:42 crc kubenswrapper[4923]: I0321 04:19:42.072013 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tbxjk" podUID="01bb7225-8917-44ec-894f-c2e237b1826e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 21 04:19:42 crc kubenswrapper[4923]: I0321 04:19:42.353195 4923 ???:1] "http: TLS handshake error from 192.168.126.11:58124: no serving certificate available for the kubelet" Mar 21 04:19:42 crc kubenswrapper[4923]: I0321 04:19:42.812225 4923 ???:1] "http: TLS handshake error from 192.168.126.11:58126: no serving certificate available for the kubelet" Mar 21 04:19:43 crc kubenswrapper[4923]: I0321 04:19:43.070750 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:43 crc kubenswrapper[4923]: I0321 04:19:43.072841 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tbxjk" Mar 21 04:19:47 crc kubenswrapper[4923]: I0321 04:19:47.380136 4923 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hqpc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 21 04:19:47 crc 
kubenswrapper[4923]: I0321 04:19:47.380460 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hqpc" podUID="3f1486ed-5a65-43a1-9a45-c02318d4d831" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 21 04:19:47 crc kubenswrapper[4923]: I0321 04:19:47.380231 4923 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hqpc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 21 04:19:47 crc kubenswrapper[4923]: I0321 04:19:47.380736 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hqpc" podUID="3f1486ed-5a65-43a1-9a45-c02318d4d831" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.37:8080/\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 21 04:19:47 crc kubenswrapper[4923]: I0321 04:19:47.557471 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:47 crc kubenswrapper[4923]: I0321 04:19:47.562594 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-p69c7" Mar 21 04:19:48 crc kubenswrapper[4923]: E0321 04:19:48.028950 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:48 crc kubenswrapper[4923]: E0321 04:19:48.031045 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an 
exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:48 crc kubenswrapper[4923]: E0321 04:19:48.033602 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:48 crc kubenswrapper[4923]: E0321 04:19:48.033660 4923 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" Mar 21 04:19:52 crc kubenswrapper[4923]: I0321 04:19:52.758582 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw"] Mar 21 04:19:52 crc kubenswrapper[4923]: I0321 04:19:52.759435 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" containerName="controller-manager" containerID="cri-o://f00f458b15e7059fe70a9624cd81e2727ee2c4d05a2359db981b734c316117e5" gracePeriod=30 Mar 21 04:19:52 crc kubenswrapper[4923]: I0321 04:19:52.769761 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz"] Mar 21 04:19:52 crc kubenswrapper[4923]: I0321 04:19:52.769998 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" 
podUID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" containerName="route-controller-manager" containerID="cri-o://7935c95971714504f5c3af5f486f9b77ae8c83d72793850f565aa187d2b3971e" gracePeriod=30 Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.210608 4923 generic.go:334] "Generic (PLEG): container finished" podID="877384d0-10ed-4fca-a600-e8fa69a85648" containerID="f00f458b15e7059fe70a9624cd81e2727ee2c4d05a2359db981b734c316117e5" exitCode=0 Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.210735 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" event={"ID":"877384d0-10ed-4fca-a600-e8fa69a85648","Type":"ContainerDied","Data":"f00f458b15e7059fe70a9624cd81e2727ee2c4d05a2359db981b734c316117e5"} Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.212375 4923 generic.go:334] "Generic (PLEG): container finished" podID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" containerID="7935c95971714504f5c3af5f486f9b77ae8c83d72793850f565aa187d2b3971e" exitCode=0 Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.212419 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" event={"ID":"a4b3225e-19a4-40c0-b9fe-b93566ed64e9","Type":"ContainerDied","Data":"7935c95971714504f5c3af5f486f9b77ae8c83d72793850f565aa187d2b3971e"} Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.232153 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.232197 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.233862 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.239016 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.322042 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.333058 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.333153 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.337765 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.338420 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.603801 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 21 04:19:53 crc kubenswrapper[4923]: I0321 04:19:53.632343 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:19:54 crc kubenswrapper[4923]: I0321 04:19:54.921602 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:19:55 crc kubenswrapper[4923]: I0321 04:19:55.441936 4923 patch_prober.go:28] interesting pod/controller-manager-86b7cf8fdb-k4qsw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body= Mar 21 04:19:55 crc kubenswrapper[4923]: I0321 04:19:55.442005 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" Mar 21 04:19:57 crc kubenswrapper[4923]: I0321 04:19:57.246699 4923 patch_prober.go:28] interesting pod/route-controller-manager-6b6c5b7fb5-7jfzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" start-of-body= Mar 21 04:19:57 crc kubenswrapper[4923]: I0321 04:19:57.247196 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" podUID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Mar 21 04:19:57 crc kubenswrapper[4923]: I0321 04:19:57.384882 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2hqpc" Mar 21 04:19:58 crc kubenswrapper[4923]: E0321 04:19:58.029366 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:58 crc kubenswrapper[4923]: E0321 04:19:58.030753 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:58 crc kubenswrapper[4923]: E0321 04:19:58.032679 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:19:58 crc kubenswrapper[4923]: E0321 04:19:58.032727 4923 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.127778 4923 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29567780-l4jnj"] Mar 21 04:20:00 crc kubenswrapper[4923]: E0321 04:20:00.128041 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a53f400-4470-4aef-8ff2-33331220a390" containerName="pruner" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.128059 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a53f400-4470-4aef-8ff2-33331220a390" containerName="pruner" Mar 21 04:20:00 crc kubenswrapper[4923]: E0321 04:20:00.128071 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b717fc18-bfdb-4e99-8fc0-ea7c905dd908" containerName="collect-profiles" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.128079 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="b717fc18-bfdb-4e99-8fc0-ea7c905dd908" containerName="collect-profiles" Mar 21 04:20:00 crc kubenswrapper[4923]: E0321 04:20:00.128123 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e14e6aa-f23a-40f0-a8a3-10ee545ce37a" containerName="pruner" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.128134 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e14e6aa-f23a-40f0-a8a3-10ee545ce37a" containerName="pruner" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.128484 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a53f400-4470-4aef-8ff2-33331220a390" containerName="pruner" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.128520 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="b717fc18-bfdb-4e99-8fc0-ea7c905dd908" containerName="collect-profiles" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.128537 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e14e6aa-f23a-40f0-a8a3-10ee545ce37a" containerName="pruner" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.129046 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567780-l4jnj" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.132259 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567780-l4jnj"] Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.137818 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.138435 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.138880 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.259555 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrv4t\" (UniqueName: \"kubernetes.io/projected/dd8587a2-6b9d-46d9-aa09-c726997f9681-kube-api-access-zrv4t\") pod \"auto-csr-approver-29567780-l4jnj\" (UID: \"dd8587a2-6b9d-46d9-aa09-c726997f9681\") " pod="openshift-infra/auto-csr-approver-29567780-l4jnj" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.361471 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrv4t\" (UniqueName: \"kubernetes.io/projected/dd8587a2-6b9d-46d9-aa09-c726997f9681-kube-api-access-zrv4t\") pod \"auto-csr-approver-29567780-l4jnj\" (UID: \"dd8587a2-6b9d-46d9-aa09-c726997f9681\") " pod="openshift-infra/auto-csr-approver-29567780-l4jnj" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.381304 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrv4t\" (UniqueName: \"kubernetes.io/projected/dd8587a2-6b9d-46d9-aa09-c726997f9681-kube-api-access-zrv4t\") pod \"auto-csr-approver-29567780-l4jnj\" (UID: \"dd8587a2-6b9d-46d9-aa09-c726997f9681\") " 
pod="openshift-infra/auto-csr-approver-29567780-l4jnj" Mar 21 04:20:00 crc kubenswrapper[4923]: I0321 04:20:00.460938 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567780-l4jnj" Mar 21 04:20:03 crc kubenswrapper[4923]: I0321 04:20:03.385963 4923 ???:1] "http: TLS handshake error from 192.168.126.11:43008: no serving certificate available for the kubelet" Mar 21 04:20:05 crc kubenswrapper[4923]: I0321 04:20:05.375128 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 21 04:20:05 crc kubenswrapper[4923]: I0321 04:20:05.442118 4923 patch_prober.go:28] interesting pod/controller-manager-86b7cf8fdb-k4qsw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body= Mar 21 04:20:05 crc kubenswrapper[4923]: I0321 04:20:05.442194 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" Mar 21 04:20:06 crc kubenswrapper[4923]: I0321 04:20:06.403673 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.403642712 podStartE2EDuration="1.403642712s" podCreationTimestamp="2026-03-21 04:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:20:06.388446338 +0000 UTC m=+171.541457465" watchObservedRunningTime="2026-03-21 04:20:06.403642712 +0000 UTC m=+171.556653839" Mar 21 04:20:08 crc 
kubenswrapper[4923]: E0321 04:20:08.026704 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916 is running failed: container process not found" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:20:08 crc kubenswrapper[4923]: E0321 04:20:08.027509 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916 is running failed: container process not found" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:20:08 crc kubenswrapper[4923]: E0321 04:20:08.028076 4923 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916 is running failed: container process not found" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 21 04:20:08 crc kubenswrapper[4923]: E0321 04:20:08.028139 4923 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" Mar 21 04:20:08 crc kubenswrapper[4923]: I0321 04:20:08.246293 4923 patch_prober.go:28] interesting pod/route-controller-manager-6b6c5b7fb5-7jfzz 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:20:08 crc kubenswrapper[4923]: I0321 04:20:08.246452 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" podUID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:20:08 crc kubenswrapper[4923]: I0321 04:20:08.287740 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cf5k2" Mar 21 04:20:09 crc kubenswrapper[4923]: I0321 04:20:09.306577 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l24r9_6d274204-1cbf-4028-8cc7-ae94ad474006/kube-multus-additional-cni-plugins/0.log" Mar 21 04:20:09 crc kubenswrapper[4923]: I0321 04:20:09.306620 4923 generic.go:334] "Generic (PLEG): container finished" podID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" exitCode=137 Mar 21 04:20:09 crc kubenswrapper[4923]: I0321 04:20:09.306646 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" event={"ID":"6d274204-1cbf-4028-8cc7-ae94ad474006","Type":"ContainerDied","Data":"e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916"} Mar 21 04:20:10 crc kubenswrapper[4923]: I0321 04:20:10.860053 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 21 04:20:10 crc 
kubenswrapper[4923]: I0321 04:20:10.861451 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:10 crc kubenswrapper[4923]: I0321 04:20:10.863770 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 21 04:20:10 crc kubenswrapper[4923]: I0321 04:20:10.863798 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 21 04:20:10 crc kubenswrapper[4923]: I0321 04:20:10.874606 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 21 04:20:11 crc kubenswrapper[4923]: I0321 04:20:11.015198 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1583f3-0853-456e-8ea6-8019561c68dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:11 crc kubenswrapper[4923]: I0321 04:20:11.015253 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1583f3-0853-456e-8ea6-8019561c68dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:11 crc kubenswrapper[4923]: I0321 04:20:11.116749 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1583f3-0853-456e-8ea6-8019561c68dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:11 crc kubenswrapper[4923]: I0321 04:20:11.116794 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1583f3-0853-456e-8ea6-8019561c68dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:11 crc kubenswrapper[4923]: I0321 04:20:11.116912 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1583f3-0853-456e-8ea6-8019561c68dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:11 crc kubenswrapper[4923]: I0321 04:20:11.137344 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1583f3-0853-456e-8ea6-8019561c68dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:11 crc kubenswrapper[4923]: I0321 04:20:11.179286 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 21 04:20:11 crc kubenswrapper[4923]: E0321 04:20:11.781782 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 21 04:20:11 crc kubenswrapper[4923]: E0321 04:20:11.782239 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6j8wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v7tdq_openshift-marketplace(44513839-d352-4720-a552-bc11c6030391): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:11 crc kubenswrapper[4923]: E0321 04:20:11.783464 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-v7tdq" podUID="44513839-d352-4720-a552-bc11c6030391" Mar 21 04:20:13 crc kubenswrapper[4923]: E0321 04:20:13.424842 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v7tdq" podUID="44513839-d352-4720-a552-bc11c6030391" Mar 21 04:20:13 crc kubenswrapper[4923]: E0321 04:20:13.508468 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 21 04:20:13 crc kubenswrapper[4923]: E0321 04:20:13.508642 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hq8bw_openshift-marketplace(3dd7868c-7d3e-43de-b0f9-1ab51280fce5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:13 crc kubenswrapper[4923]: E0321 04:20:13.509824 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hq8bw" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" Mar 21 04:20:13 crc 
kubenswrapper[4923]: E0321 04:20:13.510920 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 21 04:20:13 crc kubenswrapper[4923]: E0321 04:20:13.510988 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zvsd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-lcp5h_openshift-marketplace(ded4d513-cc92-405c-8009-913f9aa7ea5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:13 crc kubenswrapper[4923]: E0321 04:20:13.512670 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lcp5h" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.811459 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hq8bw" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.812093 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lcp5h" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.878071 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.878203 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpt67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t2p5d_openshift-marketplace(d0e9b7e6-79e7-47f0-a488-182df8bb166e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.879441 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t2p5d" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.903564 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.903689 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lxb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m57bq_openshift-marketplace(e8d52ff5-e444-4d7d-952f-6d95888a7791): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:14 crc kubenswrapper[4923]: E0321 04:20:14.904985 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m57bq" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.456850 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.468939 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.469080 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.570824 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-var-lock\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.570958 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.571003 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d694cb6-70ab-4c6b-98ee-9aa819980356-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.672214 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.672350 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 
crc kubenswrapper[4923]: I0321 04:20:15.672358 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d694cb6-70ab-4c6b-98ee-9aa819980356-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.672463 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-var-lock\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.672645 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-var-lock\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.697891 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d694cb6-70ab-4c6b-98ee-9aa819980356-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:15 crc kubenswrapper[4923]: I0321 04:20:15.789386 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.111445 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t2p5d" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.111486 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m57bq" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.180891 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.181066 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k86bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5twmg_openshift-marketplace(3b1d4e5a-6b46-4203-ba69-6440844e48ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.183372 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5twmg" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" Mar 21 04:20:16 crc 
kubenswrapper[4923]: E0321 04:20:16.213681 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.214137 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4z6bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-9grdl_openshift-marketplace(dd250302-91b0-41c4-b138-89559a78d375): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.217371 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9grdl" podUID="dd250302-91b0-41c4-b138-89559a78d375" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.218843 4923 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.218937 4923 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zqmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hgbdf_openshift-marketplace(85f7800b-eaa8-45bd-95b4-ee4885cadf52): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.220067 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hgbdf" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" Mar 21 04:20:16 crc 
kubenswrapper[4923]: I0321 04:20:16.242950 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.243280 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.250917 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l24r9_6d274204-1cbf-4028-8cc7-ae94ad474006/kube-multus-additional-cni-plugins/0.log" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.250973 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.309804 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"] Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.310605 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.310689 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.310752 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" containerName="route-controller-manager" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.310812 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" containerName="route-controller-manager" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.310876 4923 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" containerName="controller-manager" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.310932 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" containerName="controller-manager" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.311198 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" containerName="controller-manager" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.311271 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" containerName="route-controller-manager" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.311355 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" containerName="kube-multus-additional-cni-plugins" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.312001 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.324463 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.344482 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" event={"ID":"877384d0-10ed-4fca-a600-e8fa69a85648","Type":"ContainerDied","Data":"b882137c5a69cec2f2f95566c76913259d102883bf8cd17dd977c2455ad24c0d"} Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.344544 4923 scope.go:117] "RemoveContainer" containerID="f00f458b15e7059fe70a9624cd81e2727ee2c4d05a2359db981b734c316117e5" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.344498 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.347156 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l24r9_6d274204-1cbf-4028-8cc7-ae94ad474006/kube-multus-additional-cni-plugins/0.log" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.347298 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.347291 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l24r9" event={"ID":"6d274204-1cbf-4028-8cc7-ae94ad474006","Type":"ContainerDied","Data":"7fbf7c6d5c3057bc98f21b2167ed422e96ba8922452a50c2bd14db423cd134b6"} Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.350786 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" event={"ID":"a4b3225e-19a4-40c0-b9fe-b93566ed64e9","Type":"ContainerDied","Data":"87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8"} Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.350857 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.355330 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hgbdf" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.355577 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5twmg" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.359228 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9grdl" podUID="dd250302-91b0-41c4-b138-89559a78d375" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382367 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6d274204-1cbf-4028-8cc7-ae94ad474006-ready\") pod \"6d274204-1cbf-4028-8cc7-ae94ad474006\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382420 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d274204-1cbf-4028-8cc7-ae94ad474006-tuning-conf-dir\") pod \"6d274204-1cbf-4028-8cc7-ae94ad474006\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " Mar 21 04:20:16 crc 
kubenswrapper[4923]: I0321 04:20:16.382460 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877384d0-10ed-4fca-a600-e8fa69a85648-serving-cert\") pod \"877384d0-10ed-4fca-a600-e8fa69a85648\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382498 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-config\") pod \"877384d0-10ed-4fca-a600-e8fa69a85648\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382545 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-client-ca\") pod \"877384d0-10ed-4fca-a600-e8fa69a85648\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382596 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxkjc\" (UniqueName: \"kubernetes.io/projected/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-kube-api-access-qxkjc\") pod \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382638 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d274204-1cbf-4028-8cc7-ae94ad474006-cni-sysctl-allowlist\") pod \"6d274204-1cbf-4028-8cc7-ae94ad474006\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382669 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fjmv\" (UniqueName: 
\"kubernetes.io/projected/877384d0-10ed-4fca-a600-e8fa69a85648-kube-api-access-6fjmv\") pod \"877384d0-10ed-4fca-a600-e8fa69a85648\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382695 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-client-ca\") pod \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382727 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-config\") pod \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382752 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-serving-cert\") pod \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\" (UID: \"a4b3225e-19a4-40c0-b9fe-b93566ed64e9\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382784 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbx5p\" (UniqueName: \"kubernetes.io/projected/6d274204-1cbf-4028-8cc7-ae94ad474006-kube-api-access-wbx5p\") pod \"6d274204-1cbf-4028-8cc7-ae94ad474006\" (UID: \"6d274204-1cbf-4028-8cc7-ae94ad474006\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.382819 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-proxy-ca-bundles\") pod \"877384d0-10ed-4fca-a600-e8fa69a85648\" (UID: \"877384d0-10ed-4fca-a600-e8fa69a85648\") " Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 
04:20:16.383360 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d274204-1cbf-4028-8cc7-ae94ad474006-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "6d274204-1cbf-4028-8cc7-ae94ad474006" (UID: "6d274204-1cbf-4028-8cc7-ae94ad474006"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.383940 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d274204-1cbf-4028-8cc7-ae94ad474006-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "6d274204-1cbf-4028-8cc7-ae94ad474006" (UID: "6d274204-1cbf-4028-8cc7-ae94ad474006"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.384071 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d274204-1cbf-4028-8cc7-ae94ad474006-ready" (OuterVolumeSpecName: "ready") pod "6d274204-1cbf-4028-8cc7-ae94ad474006" (UID: "6d274204-1cbf-4028-8cc7-ae94ad474006"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.384375 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-config" (OuterVolumeSpecName: "config") pod "877384d0-10ed-4fca-a600-e8fa69a85648" (UID: "877384d0-10ed-4fca-a600-e8fa69a85648"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.384478 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "877384d0-10ed-4fca-a600-e8fa69a85648" (UID: "877384d0-10ed-4fca-a600-e8fa69a85648"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.384521 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-client-ca" (OuterVolumeSpecName: "client-ca") pod "877384d0-10ed-4fca-a600-e8fa69a85648" (UID: "877384d0-10ed-4fca-a600-e8fa69a85648"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.384596 4923 scope.go:117] "RemoveContainer" containerID="e01b2d2b82733974aed179fba8278886fccd815af9d5a326585915dbb6ba2916" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.385025 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4b3225e-19a4-40c0-b9fe-b93566ed64e9" (UID: "a4b3225e-19a4-40c0-b9fe-b93566ed64e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.385207 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-config" (OuterVolumeSpecName: "config") pod "a4b3225e-19a4-40c0-b9fe-b93566ed64e9" (UID: "a4b3225e-19a4-40c0-b9fe-b93566ed64e9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.389369 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877384d0-10ed-4fca-a600-e8fa69a85648-kube-api-access-6fjmv" (OuterVolumeSpecName: "kube-api-access-6fjmv") pod "877384d0-10ed-4fca-a600-e8fa69a85648" (UID: "877384d0-10ed-4fca-a600-e8fa69a85648"). InnerVolumeSpecName "kube-api-access-6fjmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.392010 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-kube-api-access-qxkjc" (OuterVolumeSpecName: "kube-api-access-qxkjc") pod "a4b3225e-19a4-40c0-b9fe-b93566ed64e9" (UID: "a4b3225e-19a4-40c0-b9fe-b93566ed64e9"). InnerVolumeSpecName "kube-api-access-qxkjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.393240 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4b3225e-19a4-40c0-b9fe-b93566ed64e9" (UID: "a4b3225e-19a4-40c0-b9fe-b93566ed64e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.394575 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d274204-1cbf-4028-8cc7-ae94ad474006-kube-api-access-wbx5p" (OuterVolumeSpecName: "kube-api-access-wbx5p") pod "6d274204-1cbf-4028-8cc7-ae94ad474006" (UID: "6d274204-1cbf-4028-8cc7-ae94ad474006"). InnerVolumeSpecName "kube-api-access-wbx5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.404377 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877384d0-10ed-4fca-a600-e8fa69a85648-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "877384d0-10ed-4fca-a600-e8fa69a85648" (UID: "877384d0-10ed-4fca-a600-e8fa69a85648"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.421206 4923 scope.go:117] "RemoveContainer" containerID="7935c95971714504f5c3af5f486f9b77ae8c83d72793850f565aa187d2b3971e" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.441942 4923 patch_prober.go:28] interesting pod/controller-manager-86b7cf8fdb-k4qsw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.442004 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.483824 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwh8z\" (UniqueName: \"kubernetes.io/projected/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-kube-api-access-vwh8z\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 
crc kubenswrapper[4923]: I0321 04:20:16.483867 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-serving-cert\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.483894 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-config\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.483918 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-proxy-ca-bundles\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.483972 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-client-ca\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484058 4923 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d274204-1cbf-4028-8cc7-ae94ad474006-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" 
Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484069 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fjmv\" (UniqueName: \"kubernetes.io/projected/877384d0-10ed-4fca-a600-e8fa69a85648-kube-api-access-6fjmv\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484079 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484088 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484096 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484107 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbx5p\" (UniqueName: \"kubernetes.io/projected/6d274204-1cbf-4028-8cc7-ae94ad474006-kube-api-access-wbx5p\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484116 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484127 4923 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6d274204-1cbf-4028-8cc7-ae94ad474006-ready\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484135 4923 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/6d274204-1cbf-4028-8cc7-ae94ad474006-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484144 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/877384d0-10ed-4fca-a600-e8fa69a85648-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484152 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484160 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/877384d0-10ed-4fca-a600-e8fa69a85648-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.484169 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxkjc\" (UniqueName: \"kubernetes.io/projected/a4b3225e-19a4-40c0-b9fe-b93566ed64e9-kube-api-access-qxkjc\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.551669 4923 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d274204_1cbf_4028_8cc7_ae94ad474006.slice/crio-7fbf7c6d5c3057bc98f21b2167ed422e96ba8922452a50c2bd14db423cd134b6: Error finding container 7fbf7c6d5c3057bc98f21b2167ed422e96ba8922452a50c2bd14db423cd134b6: Status 404 returned error can't find the container with id 7fbf7c6d5c3057bc98f21b2167ed422e96ba8922452a50c2bd14db423cd134b6 Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.559074 4923 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877384d0_10ed_4fca_a600_e8fa69a85648.slice/crio-b882137c5a69cec2f2f95566c76913259d102883bf8cd17dd977c2455ad24c0d: Error 
finding container b882137c5a69cec2f2f95566c76913259d102883bf8cd17dd977c2455ad24c0d: Status 404 returned error can't find the container with id b882137c5a69cec2f2f95566c76913259d102883bf8cd17dd977c2455ad24c0d Mar 21 04:20:16 crc kubenswrapper[4923]: E0321 04:20:16.565374 4923 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice/crio-87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8: Error finding container 87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8: Status 404 returned error can't find the container with id 87ff01973b9f70af4dd832ebf4e246b2264c017f99436a984a95802c6516eff8 Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.585192 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwh8z\" (UniqueName: \"kubernetes.io/projected/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-kube-api-access-vwh8z\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.585236 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-serving-cert\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.585258 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-config\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc 
kubenswrapper[4923]: I0321 04:20:16.585276 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-proxy-ca-bundles\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.585300 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-client-ca\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.586554 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-client-ca\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.586812 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-proxy-ca-bundles\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.587127 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-config\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " 
pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.591524 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-serving-cert\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.603692 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwh8z\" (UniqueName: \"kubernetes.io/projected/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-kube-api-access-vwh8z\") pod \"controller-manager-7d4b746f48-r5bn4\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") " pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: W0321 04:20:16.651568 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-dac405b51987a24ec9d86a283f65461616436f61316b9fa5673a24dc56494ef2 WatchSource:0}: Error finding container dac405b51987a24ec9d86a283f65461616436f61316b9fa5673a24dc56494ef2: Status 404 returned error can't find the container with id dac405b51987a24ec9d86a283f65461616436f61316b9fa5673a24dc56494ef2 Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.662623 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.665592 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86b7cf8fdb-k4qsw"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.676854 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.678432 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.682363 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6c5b7fb5-7jfzz"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.685744 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l24r9"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.689446 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l24r9"] Mar 21 04:20:16 crc kubenswrapper[4923]: W0321 04:20:16.801987 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-4ee5270844be0285dc463c8bdec0de02e0512a1b7c68cbc1aadd36e2f32ee4af WatchSource:0}: Error finding container 4ee5270844be0285dc463c8bdec0de02e0512a1b7c68cbc1aadd36e2f32ee4af: Status 404 returned error can't find the container with id 4ee5270844be0285dc463c8bdec0de02e0512a1b7c68cbc1aadd36e2f32ee4af Mar 21 04:20:16 crc kubenswrapper[4923]: W0321 04:20:16.809607 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ed41e27553ae5bf7297360862662696e053bba33a5fa89d7f4dcae81ae57a074 WatchSource:0}: Error finding container ed41e27553ae5bf7297360862662696e053bba33a5fa89d7f4dcae81ae57a074: Status 404 returned error can't find the container with id ed41e27553ae5bf7297360862662696e053bba33a5fa89d7f4dcae81ae57a074 Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.933165 4923 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.943681 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.980012 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567780-l4jnj"] Mar 21 04:20:16 crc kubenswrapper[4923]: I0321 04:20:16.982421 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 21 04:20:16 crc kubenswrapper[4923]: W0321 04:20:16.988064 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd8587a2_6b9d_46d9_aa09_c726997f9681.slice/crio-73ba060d96ec7c584d2c7fb1e8eb0903c04df3cb70095cf96d5ebdaa6cc70be5 WatchSource:0}: Error finding container 73ba060d96ec7c584d2c7fb1e8eb0903c04df3cb70095cf96d5ebdaa6cc70be5: Status 404 returned error can't find the container with id 73ba060d96ec7c584d2c7fb1e8eb0903c04df3cb70095cf96d5ebdaa6cc70be5 Mar 21 04:20:16 crc kubenswrapper[4923]: W0321 04:20:16.989905 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3d694cb6_70ab_4c6b_98ee_9aa819980356.slice/crio-b0a0b1f4ce5623e0647ee025df56fb9ac91147964b8dea3b2d312348d9a5a0b1 WatchSource:0}: Error finding container b0a0b1f4ce5623e0647ee025df56fb9ac91147964b8dea3b2d312348d9a5a0b1: Status 404 returned error can't find the container with id b0a0b1f4ce5623e0647ee025df56fb9ac91147964b8dea3b2d312348d9a5a0b1 Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.358755 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d694cb6-70ab-4c6b-98ee-9aa819980356","Type":"ContainerStarted","Data":"3defa9f1db1c3bb755d3dde0265c18ddf343df06466e34c33abe79e777f246d7"} Mar 
21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.359055 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d694cb6-70ab-4c6b-98ee-9aa819980356","Type":"ContainerStarted","Data":"b0a0b1f4ce5623e0647ee025df56fb9ac91147964b8dea3b2d312348d9a5a0b1"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.362553 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"189e8e50b1b81d5c89f8288624fa94c98a83aa874d274220ed93c75182ec4e3b"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.362582 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ee5270844be0285dc463c8bdec0de02e0512a1b7c68cbc1aadd36e2f32ee4af"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.364104 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"74ac9f652e86857a105b75e831f101567c425a7dd969ef4efa8236d8ba534ad5"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.364150 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dac405b51987a24ec9d86a283f65461616436f61316b9fa5673a24dc56494ef2"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.365123 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567780-l4jnj" 
event={"ID":"dd8587a2-6b9d-46d9-aa09-c726997f9681","Type":"ContainerStarted","Data":"73ba060d96ec7c584d2c7fb1e8eb0903c04df3cb70095cf96d5ebdaa6cc70be5"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.370613 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" event={"ID":"2b7b4d68-3228-4b25-b33a-02b363b9e8b6","Type":"ContainerStarted","Data":"da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.370647 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" event={"ID":"2b7b4d68-3228-4b25-b33a-02b363b9e8b6","Type":"ContainerStarted","Data":"0437966b44cbc113e87ccc4680322e05b8c9ca400d6938352cf491475c764287"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.371384 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.372502 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f1583f3-0853-456e-8ea6-8019561c68dd","Type":"ContainerStarted","Data":"83e87fd1d0ee5cc043e19cd879de56b5554c930e8ad3992809fd7872326e6509"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.372527 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f1583f3-0853-456e-8ea6-8019561c68dd","Type":"ContainerStarted","Data":"23f425819b71b9a61d0170df8aae57e689eae8f5b502ae012a321ea610134060"} Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.374454 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"541417d0cecfb1d958a1c3c35eb09fe7ee78bd5fb02bc979b1bb0a0ab5f8dbb0"}
Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.374486 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ed41e27553ae5bf7297360862662696e053bba33a5fa89d7f4dcae81ae57a074"}
Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.374844 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.379236 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.379220802 podStartE2EDuration="2.379220802s" podCreationTimestamp="2026-03-21 04:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:20:17.377515341 +0000 UTC m=+182.530526428" watchObservedRunningTime="2026-03-21 04:20:17.379220802 +0000 UTC m=+182.532231889"
Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.380675 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"
Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.400403 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" podStartSLOduration=5.400385555 podStartE2EDuration="5.400385555s" podCreationTimestamp="2026-03-21 04:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:20:17.398775827 +0000 UTC m=+182.551786924" watchObservedRunningTime="2026-03-21 04:20:17.400385555 +0000 UTC m=+182.553396642"
Mar 21 04:20:17 crc kubenswrapper[4923]: I0321 04:20:17.438653 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.438633859 podStartE2EDuration="7.438633859s" podCreationTimestamp="2026-03-21 04:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:20:17.413147277 +0000 UTC m=+182.566158364" watchObservedRunningTime="2026-03-21 04:20:17.438633859 +0000 UTC m=+182.591644946"
Mar 21 04:20:17 crc kubenswrapper[4923]: E0321 04:20:17.457474 4923 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877384d0_10ed_4fca_a600_e8fa69a85648.slice\": RecentStats: unable to find data in memory cache]"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.367102 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d274204-1cbf-4028-8cc7-ae94ad474006" path="/var/lib/kubelet/pods/6d274204-1cbf-4028-8cc7-ae94ad474006/volumes"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.367886 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877384d0-10ed-4fca-a600-e8fa69a85648" path="/var/lib/kubelet/pods/877384d0-10ed-4fca-a600-e8fa69a85648/volumes"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.368405 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b3225e-19a4-40c0-b9fe-b93566ed64e9" path="/var/lib/kubelet/pods/a4b3225e-19a4-40c0-b9fe-b93566ed64e9/volumes"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.385636 4923 generic.go:334] "Generic (PLEG): container finished" podID="7f1583f3-0853-456e-8ea6-8019561c68dd" containerID="83e87fd1d0ee5cc043e19cd879de56b5554c930e8ad3992809fd7872326e6509" exitCode=0
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.385993 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f1583f3-0853-456e-8ea6-8019561c68dd","Type":"ContainerDied","Data":"83e87fd1d0ee5cc043e19cd879de56b5554c930e8ad3992809fd7872326e6509"}
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.905860 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"]
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.906758 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.909130 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.909277 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.909589 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.909792 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.909919 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.913682 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 21 04:20:18 crc kubenswrapper[4923]: I0321 04:20:18.928732 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"]
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.019115 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-client-ca\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.019580 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-config\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.019646 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47pdk\" (UniqueName: \"kubernetes.io/projected/d3e11436-0865-4e63-a0c4-65af594ee22b-kube-api-access-47pdk\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.019685 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3e11436-0865-4e63-a0c4-65af594ee22b-serving-cert\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.121634 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-client-ca\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.121755 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-config\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.121783 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47pdk\" (UniqueName: \"kubernetes.io/projected/d3e11436-0865-4e63-a0c4-65af594ee22b-kube-api-access-47pdk\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.121802 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3e11436-0865-4e63-a0c4-65af594ee22b-serving-cert\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.122588 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-client-ca\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.123770 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-config\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.133104 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3e11436-0865-4e63-a0c4-65af594ee22b-serving-cert\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.138973 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47pdk\" (UniqueName: \"kubernetes.io/projected/d3e11436-0865-4e63-a0c4-65af594ee22b-kube-api-access-47pdk\") pod \"route-controller-manager-b7956ff5-6zqdl\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.232029 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.461658 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"]
Mar 21 04:20:19 crc kubenswrapper[4923]: W0321 04:20:19.484296 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e11436_0865_4e63_a0c4_65af594ee22b.slice/crio-456c4f22cd58becce48ad2b95c1689fef2e5ea1e827bee951e3b60f0cea4a788 WatchSource:0}: Error finding container 456c4f22cd58becce48ad2b95c1689fef2e5ea1e827bee951e3b60f0cea4a788: Status 404 returned error can't find the container with id 456c4f22cd58becce48ad2b95c1689fef2e5ea1e827bee951e3b60f0cea4a788
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.609789 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.731537 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1583f3-0853-456e-8ea6-8019561c68dd-kubelet-dir\") pod \"7f1583f3-0853-456e-8ea6-8019561c68dd\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") "
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.731636 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1583f3-0853-456e-8ea6-8019561c68dd-kube-api-access\") pod \"7f1583f3-0853-456e-8ea6-8019561c68dd\" (UID: \"7f1583f3-0853-456e-8ea6-8019561c68dd\") "
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.732265 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f1583f3-0853-456e-8ea6-8019561c68dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f1583f3-0853-456e-8ea6-8019561c68dd" (UID: "7f1583f3-0853-456e-8ea6-8019561c68dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.740633 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1583f3-0853-456e-8ea6-8019561c68dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f1583f3-0853-456e-8ea6-8019561c68dd" (UID: "7f1583f3-0853-456e-8ea6-8019561c68dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.832705 4923 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1583f3-0853-456e-8ea6-8019561c68dd-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:19 crc kubenswrapper[4923]: I0321 04:20:19.832745 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1583f3-0853-456e-8ea6-8019561c68dd-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.399028 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7f1583f3-0853-456e-8ea6-8019561c68dd","Type":"ContainerDied","Data":"23f425819b71b9a61d0170df8aae57e689eae8f5b502ae012a321ea610134060"}
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.399356 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f425819b71b9a61d0170df8aae57e689eae8f5b502ae012a321ea610134060"
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.399423 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.401835 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" event={"ID":"d3e11436-0865-4e63-a0c4-65af594ee22b","Type":"ContainerStarted","Data":"ed9ed7b1c87ded588a6bf1b1576c7fbb891bc20194ed6ec8c3acea61b8da226a"}
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.401871 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" event={"ID":"d3e11436-0865-4e63-a0c4-65af594ee22b","Type":"ContainerStarted","Data":"456c4f22cd58becce48ad2b95c1689fef2e5ea1e827bee951e3b60f0cea4a788"}
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.402168 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.406943 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"
Mar 21 04:20:20 crc kubenswrapper[4923]: I0321 04:20:20.426461 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" podStartSLOduration=8.426441603 podStartE2EDuration="8.426441603s" podCreationTimestamp="2026-03-21 04:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:20:20.420146844 +0000 UTC m=+185.573157931" watchObservedRunningTime="2026-03-21 04:20:20.426441603 +0000 UTC m=+185.579452690"
Mar 21 04:20:24 crc kubenswrapper[4923]: I0321 04:20:24.400589 4923 csr.go:261] certificate signing request csr-nnp7p is approved, waiting to be issued
Mar 21 04:20:24 crc kubenswrapper[4923]: I0321 04:20:24.405093 4923 csr.go:257] certificate signing request csr-nnp7p is issued
Mar 21 04:20:24 crc kubenswrapper[4923]: I0321 04:20:24.422497 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567780-l4jnj" event={"ID":"dd8587a2-6b9d-46d9-aa09-c726997f9681","Type":"ContainerStarted","Data":"99600ca879154f694f6a2dcf96cd3009c8ef61b7c0fef63d979a2e1140df867f"}
Mar 21 04:20:24 crc kubenswrapper[4923]: I0321 04:20:24.435771 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567780-l4jnj" podStartSLOduration=17.595294416 podStartE2EDuration="24.435753537s" podCreationTimestamp="2026-03-21 04:20:00 +0000 UTC" firstStartedPulling="2026-03-21 04:20:16.993575084 +0000 UTC m=+182.146586171" lastFinishedPulling="2026-03-21 04:20:23.834034205 +0000 UTC m=+188.987045292" observedRunningTime="2026-03-21 04:20:24.435146218 +0000 UTC m=+189.588157305" watchObservedRunningTime="2026-03-21 04:20:24.435753537 +0000 UTC m=+189.588764624"
Mar 21 04:20:25 crc kubenswrapper[4923]: I0321 04:20:25.406915 4923 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-16 04:06:19.951480238 +0000 UTC
Mar 21 04:20:25 crc kubenswrapper[4923]: I0321 04:20:25.408155 4923 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 5759h45m54.543331848s for next certificate rotation
Mar 21 04:20:25 crc kubenswrapper[4923]: I0321 04:20:25.429870 4923 generic.go:334] "Generic (PLEG): container finished" podID="dd8587a2-6b9d-46d9-aa09-c726997f9681" containerID="99600ca879154f694f6a2dcf96cd3009c8ef61b7c0fef63d979a2e1140df867f" exitCode=0
Mar 21 04:20:25 crc kubenswrapper[4923]: I0321 04:20:25.429923 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567780-l4jnj" event={"ID":"dd8587a2-6b9d-46d9-aa09-c726997f9681","Type":"ContainerDied","Data":"99600ca879154f694f6a2dcf96cd3009c8ef61b7c0fef63d979a2e1140df867f"}
Mar 21 04:20:26 crc kubenswrapper[4923]: I0321 04:20:26.408752 4923 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-18 13:16:21.07737441 +0000 UTC
Mar 21 04:20:26 crc kubenswrapper[4923]: I0321 04:20:26.409370 4923 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6536h55m54.668009204s for next certificate rotation
Mar 21 04:20:26 crc kubenswrapper[4923]: I0321 04:20:26.724670 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567780-l4jnj"
Mar 21 04:20:26 crc kubenswrapper[4923]: I0321 04:20:26.826964 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrv4t\" (UniqueName: \"kubernetes.io/projected/dd8587a2-6b9d-46d9-aa09-c726997f9681-kube-api-access-zrv4t\") pod \"dd8587a2-6b9d-46d9-aa09-c726997f9681\" (UID: \"dd8587a2-6b9d-46d9-aa09-c726997f9681\") "
Mar 21 04:20:26 crc kubenswrapper[4923]: I0321 04:20:26.832323 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8587a2-6b9d-46d9-aa09-c726997f9681-kube-api-access-zrv4t" (OuterVolumeSpecName: "kube-api-access-zrv4t") pod "dd8587a2-6b9d-46d9-aa09-c726997f9681" (UID: "dd8587a2-6b9d-46d9-aa09-c726997f9681"). InnerVolumeSpecName "kube-api-access-zrv4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:20:26 crc kubenswrapper[4923]: I0321 04:20:26.927952 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrv4t\" (UniqueName: \"kubernetes.io/projected/dd8587a2-6b9d-46d9-aa09-c726997f9681-kube-api-access-zrv4t\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:27 crc kubenswrapper[4923]: I0321 04:20:27.439469 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567780-l4jnj" event={"ID":"dd8587a2-6b9d-46d9-aa09-c726997f9681","Type":"ContainerDied","Data":"73ba060d96ec7c584d2c7fb1e8eb0903c04df3cb70095cf96d5ebdaa6cc70be5"}
Mar 21 04:20:27 crc kubenswrapper[4923]: I0321 04:20:27.439878 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73ba060d96ec7c584d2c7fb1e8eb0903c04df3cb70095cf96d5ebdaa6cc70be5"
Mar 21 04:20:27 crc kubenswrapper[4923]: I0321 04:20:27.439516 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567780-l4jnj"
Mar 21 04:20:27 crc kubenswrapper[4923]: E0321 04:20:27.578020 4923 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877384d0_10ed_4fca_a600_e8fa69a85648.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice\": RecentStats: unable to find data in memory cache]"
Mar 21 04:20:28 crc kubenswrapper[4923]: I0321 04:20:28.447012 4923 generic.go:334] "Generic (PLEG): container finished" podID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerID="1f654587163751d425bafcfd05d9df6befa8daec49704778cbb991391feba002" exitCode=0
Mar 21 04:20:28 crc kubenswrapper[4923]: I0321 04:20:28.447108 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgbdf" event={"ID":"85f7800b-eaa8-45bd-95b4-ee4885cadf52","Type":"ContainerDied","Data":"1f654587163751d425bafcfd05d9df6befa8daec49704778cbb991391feba002"}
Mar 21 04:20:28 crc kubenswrapper[4923]: I0321 04:20:28.455940 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcp5h" event={"ID":"ded4d513-cc92-405c-8009-913f9aa7ea5f","Type":"ContainerStarted","Data":"f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3"}
Mar 21 04:20:28 crc kubenswrapper[4923]: I0321 04:20:28.467720 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7tdq" event={"ID":"44513839-d352-4720-a552-bc11c6030391","Type":"ContainerStarted","Data":"6ecd92432efd87b8d6c313e494fee0df2e7c9b6e9b2e4d2d914471e7f1811ff0"}
Mar 21 04:20:29 crc kubenswrapper[4923]: I0321 04:20:29.474996 4923 generic.go:334] "Generic (PLEG): container finished" podID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerID="f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3" exitCode=0
Mar 21 04:20:29 crc kubenswrapper[4923]: I0321 04:20:29.475066 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcp5h" event={"ID":"ded4d513-cc92-405c-8009-913f9aa7ea5f","Type":"ContainerDied","Data":"f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3"}
Mar 21 04:20:29 crc kubenswrapper[4923]: I0321 04:20:29.478665 4923 generic.go:334] "Generic (PLEG): container finished" podID="44513839-d352-4720-a552-bc11c6030391" containerID="6ecd92432efd87b8d6c313e494fee0df2e7c9b6e9b2e4d2d914471e7f1811ff0" exitCode=0
Mar 21 04:20:29 crc kubenswrapper[4923]: I0321 04:20:29.478703 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7tdq" event={"ID":"44513839-d352-4720-a552-bc11c6030391","Type":"ContainerDied","Data":"6ecd92432efd87b8d6c313e494fee0df2e7c9b6e9b2e4d2d914471e7f1811ff0"}
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.495115 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2p5d" event={"ID":"d0e9b7e6-79e7-47f0-a488-182df8bb166e","Type":"ContainerStarted","Data":"4e6eea6e579b008a25a73cc45d531895353c502bb2d7dda375fdbc14ffa2b45a"}
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.497094 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7tdq" event={"ID":"44513839-d352-4720-a552-bc11c6030391","Type":"ContainerStarted","Data":"c88da9b680adeecf068170339af886890e0774f7272ad159acfedfd05c754dd1"}
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.500443 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq8bw" event={"ID":"3dd7868c-7d3e-43de-b0f9-1ab51280fce5","Type":"ContainerStarted","Data":"2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b"}
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.502762 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgbdf" event={"ID":"85f7800b-eaa8-45bd-95b4-ee4885cadf52","Type":"ContainerStarted","Data":"ef75080c25fb7d9d3207438cf12743c3b72b585e9c65cff6fac5646a8e86b910"}
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.517550 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5twmg" event={"ID":"3b1d4e5a-6b46-4203-ba69-6440844e48ad","Type":"ContainerStarted","Data":"ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88"}
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.529745 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcp5h" event={"ID":"ded4d513-cc92-405c-8009-913f9aa7ea5f","Type":"ContainerStarted","Data":"7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd"}
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.546669 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v7tdq" podStartSLOduration=2.390671405 podStartE2EDuration="54.54665052s" podCreationTimestamp="2026-03-21 04:19:37 +0000 UTC" firstStartedPulling="2026-03-21 04:19:38.88499726 +0000 UTC m=+144.038008347" lastFinishedPulling="2026-03-21 04:20:31.040976375 +0000 UTC m=+196.193987462" observedRunningTime="2026-03-21 04:20:31.544114752 +0000 UTC m=+196.697125839" watchObservedRunningTime="2026-03-21 04:20:31.54665052 +0000 UTC m=+196.699661607"
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.562120 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hgbdf" podStartSLOduration=3.567456694 podStartE2EDuration="55.562101297s" podCreationTimestamp="2026-03-21 04:19:36 +0000 UTC" firstStartedPulling="2026-03-21 04:19:37.832535851 +0000 UTC m=+142.985546938" lastFinishedPulling="2026-03-21 04:20:29.827180454 +0000 UTC m=+194.980191541" observedRunningTime="2026-03-21 04:20:31.56156254 +0000 UTC m=+196.714573647" watchObservedRunningTime="2026-03-21 04:20:31.562101297 +0000 UTC m=+196.715112384"
Mar 21 04:20:31 crc kubenswrapper[4923]: I0321 04:20:31.616431 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lcp5h" podStartSLOduration=2.475307263 podStartE2EDuration="54.616413801s" podCreationTimestamp="2026-03-21 04:19:37 +0000 UTC" firstStartedPulling="2026-03-21 04:19:38.907635497 +0000 UTC m=+144.060646584" lastFinishedPulling="2026-03-21 04:20:31.048742035 +0000 UTC m=+196.201753122" observedRunningTime="2026-03-21 04:20:31.614296615 +0000 UTC m=+196.767307712" watchObservedRunningTime="2026-03-21 04:20:31.616413801 +0000 UTC m=+196.769424888"
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.535712 4923 generic.go:334] "Generic (PLEG): container finished" podID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerID="4e6eea6e579b008a25a73cc45d531895353c502bb2d7dda375fdbc14ffa2b45a" exitCode=0
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.535795 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2p5d" event={"ID":"d0e9b7e6-79e7-47f0-a488-182df8bb166e","Type":"ContainerDied","Data":"4e6eea6e579b008a25a73cc45d531895353c502bb2d7dda375fdbc14ffa2b45a"}
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.549716 4923 generic.go:334] "Generic (PLEG): container finished" podID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerID="2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b" exitCode=0
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.549779 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq8bw" event={"ID":"3dd7868c-7d3e-43de-b0f9-1ab51280fce5","Type":"ContainerDied","Data":"2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b"}
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.552715 4923 generic.go:334] "Generic (PLEG): container finished" podID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerID="ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88" exitCode=0
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.552785 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5twmg" event={"ID":"3b1d4e5a-6b46-4203-ba69-6440844e48ad","Type":"ContainerDied","Data":"ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88"}
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.747670 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"]
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.748493 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" podUID="2b7b4d68-3228-4b25-b33a-02b363b9e8b6" containerName="controller-manager" containerID="cri-o://da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620" gracePeriod=30
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.776734 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"]
Mar 21 04:20:32 crc kubenswrapper[4923]: I0321 04:20:32.776994 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" podUID="d3e11436-0865-4e63-a0c4-65af594ee22b" containerName="route-controller-manager" containerID="cri-o://ed9ed7b1c87ded588a6bf1b1576c7fbb891bc20194ed6ec8c3acea61b8da226a" gracePeriod=30
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.345807 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.509479 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-config\") pod \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") "
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.509523 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-proxy-ca-bundles\") pod \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") "
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.509555 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-serving-cert\") pod \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") "
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.509633 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwh8z\" (UniqueName: \"kubernetes.io/projected/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-kube-api-access-vwh8z\") pod \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") "
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.509651 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-client-ca\") pod \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\" (UID: \"2b7b4d68-3228-4b25-b33a-02b363b9e8b6\") "
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.510562 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b7b4d68-3228-4b25-b33a-02b363b9e8b6" (UID: "2b7b4d68-3228-4b25-b33a-02b363b9e8b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.510576 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b7b4d68-3228-4b25-b33a-02b363b9e8b6" (UID: "2b7b4d68-3228-4b25-b33a-02b363b9e8b6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.510848 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-config" (OuterVolumeSpecName: "config") pod "2b7b4d68-3228-4b25-b33a-02b363b9e8b6" (UID: "2b7b4d68-3228-4b25-b33a-02b363b9e8b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.515287 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b7b4d68-3228-4b25-b33a-02b363b9e8b6" (UID: "2b7b4d68-3228-4b25-b33a-02b363b9e8b6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.520416 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-kube-api-access-vwh8z" (OuterVolumeSpecName: "kube-api-access-vwh8z") pod "2b7b4d68-3228-4b25-b33a-02b363b9e8b6" (UID: "2b7b4d68-3228-4b25-b33a-02b363b9e8b6"). InnerVolumeSpecName "kube-api-access-vwh8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.559838 4923 generic.go:334] "Generic (PLEG): container finished" podID="d3e11436-0865-4e63-a0c4-65af594ee22b" containerID="ed9ed7b1c87ded588a6bf1b1576c7fbb891bc20194ed6ec8c3acea61b8da226a" exitCode=0
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.559895 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" event={"ID":"d3e11436-0865-4e63-a0c4-65af594ee22b","Type":"ContainerDied","Data":"ed9ed7b1c87ded588a6bf1b1576c7fbb891bc20194ed6ec8c3acea61b8da226a"}
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.561049 4923 generic.go:334] "Generic (PLEG): container finished" podID="2b7b4d68-3228-4b25-b33a-02b363b9e8b6" containerID="da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620" exitCode=0
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.561092 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" event={"ID":"2b7b4d68-3228-4b25-b33a-02b363b9e8b6","Type":"ContainerDied","Data":"da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620"}
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.561116 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4" event={"ID":"2b7b4d68-3228-4b25-b33a-02b363b9e8b6","Type":"ContainerDied","Data":"0437966b44cbc113e87ccc4680322e05b8c9ca400d6938352cf491475c764287"}
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.561131 4923 scope.go:117] "RemoveContainer" containerID="da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620"
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.561231 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.607451 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"]
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.612516 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwh8z\" (UniqueName: \"kubernetes.io/projected/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-kube-api-access-vwh8z\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.612548 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-client-ca\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.612559 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-config\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.612567 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.612575 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b7b4d68-3228-4b25-b33a-02b363b9e8b6-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.614999 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d4b746f48-r5bn4"]
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.915176 4923 scope.go:117] "RemoveContainer" containerID="da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620"
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.915397 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"]
Mar 21 04:20:33 crc kubenswrapper[4923]: E0321 04:20:33.915667 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620\": container with ID starting with da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620 not found: ID does not exist" containerID="da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620"
Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.915708 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620"} err="failed to get container status \"da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620\": rpc error: code = NotFound desc = could not find container \"da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620\": container with ID starting with da2b5ad27afb0d99216883c7ddf3d5c866fa8946c37a0f588fe55a90f37bc620 not
found: ID does not exist" Mar 21 04:20:33 crc kubenswrapper[4923]: E0321 04:20:33.915903 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8587a2-6b9d-46d9-aa09-c726997f9681" containerName="oc" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.915919 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8587a2-6b9d-46d9-aa09-c726997f9681" containerName="oc" Mar 21 04:20:33 crc kubenswrapper[4923]: E0321 04:20:33.915938 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1583f3-0853-456e-8ea6-8019561c68dd" containerName="pruner" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.915946 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1583f3-0853-456e-8ea6-8019561c68dd" containerName="pruner" Mar 21 04:20:33 crc kubenswrapper[4923]: E0321 04:20:33.915965 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7b4d68-3228-4b25-b33a-02b363b9e8b6" containerName="controller-manager" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.915972 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7b4d68-3228-4b25-b33a-02b363b9e8b6" containerName="controller-manager" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.916097 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7b4d68-3228-4b25-b33a-02b363b9e8b6" containerName="controller-manager" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.916124 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8587a2-6b9d-46d9-aa09-c726997f9681" containerName="oc" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.916139 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1583f3-0853-456e-8ea6-8019561c68dd" containerName="pruner" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.916572 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.923957 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.924371 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.924506 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.924538 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.924506 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.926457 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.940692 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 21 04:20:33 crc kubenswrapper[4923]: I0321 04:20:33.949357 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"] Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.016321 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-serving-cert\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " 
pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.016397 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-config\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.016427 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-proxy-ca-bundles\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.016442 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-client-ca\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.016480 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rqp\" (UniqueName: \"kubernetes.io/projected/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-kube-api-access-66rqp\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.117299 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-proxy-ca-bundles\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.117383 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-client-ca\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.117451 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rqp\" (UniqueName: \"kubernetes.io/projected/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-kube-api-access-66rqp\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.117494 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-serving-cert\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.117540 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-config\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.118438 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-proxy-ca-bundles\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.118589 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-client-ca\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.119571 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-config\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.127957 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-serving-cert\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.137271 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rqp\" (UniqueName: \"kubernetes.io/projected/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-kube-api-access-66rqp\") pod \"controller-manager-84bddf74b8-qhfnk\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 
04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.158655 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.239582 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.319808 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47pdk\" (UniqueName: \"kubernetes.io/projected/d3e11436-0865-4e63-a0c4-65af594ee22b-kube-api-access-47pdk\") pod \"d3e11436-0865-4e63-a0c4-65af594ee22b\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.319943 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-client-ca\") pod \"d3e11436-0865-4e63-a0c4-65af594ee22b\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.320000 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3e11436-0865-4e63-a0c4-65af594ee22b-serving-cert\") pod \"d3e11436-0865-4e63-a0c4-65af594ee22b\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.320021 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-config\") pod \"d3e11436-0865-4e63-a0c4-65af594ee22b\" (UID: \"d3e11436-0865-4e63-a0c4-65af594ee22b\") " Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.321008 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3e11436-0865-4e63-a0c4-65af594ee22b" (UID: "d3e11436-0865-4e63-a0c4-65af594ee22b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.321025 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-config" (OuterVolumeSpecName: "config") pod "d3e11436-0865-4e63-a0c4-65af594ee22b" (UID: "d3e11436-0865-4e63-a0c4-65af594ee22b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.324254 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e11436-0865-4e63-a0c4-65af594ee22b-kube-api-access-47pdk" (OuterVolumeSpecName: "kube-api-access-47pdk") pod "d3e11436-0865-4e63-a0c4-65af594ee22b" (UID: "d3e11436-0865-4e63-a0c4-65af594ee22b"). InnerVolumeSpecName "kube-api-access-47pdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.324983 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e11436-0865-4e63-a0c4-65af594ee22b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3e11436-0865-4e63-a0c4-65af594ee22b" (UID: "d3e11436-0865-4e63-a0c4-65af594ee22b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.364086 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7b4d68-3228-4b25-b33a-02b363b9e8b6" path="/var/lib/kubelet/pods/2b7b4d68-3228-4b25-b33a-02b363b9e8b6/volumes" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.421310 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47pdk\" (UniqueName: \"kubernetes.io/projected/d3e11436-0865-4e63-a0c4-65af594ee22b-kube-api-access-47pdk\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.421363 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.421375 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3e11436-0865-4e63-a0c4-65af594ee22b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.421385 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e11436-0865-4e63-a0c4-65af594ee22b-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.568068 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" event={"ID":"d3e11436-0865-4e63-a0c4-65af594ee22b","Type":"ContainerDied","Data":"456c4f22cd58becce48ad2b95c1689fef2e5ea1e827bee951e3b60f0cea4a788"} Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.568119 4923 scope.go:117] "RemoveContainer" containerID="ed9ed7b1c87ded588a6bf1b1576c7fbb891bc20194ed6ec8c3acea61b8da226a" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.568235 4923 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl" Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.598449 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"] Mar 21 04:20:34 crc kubenswrapper[4923]: I0321 04:20:34.600979 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b7956ff5-6zqdl"] Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.917932 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"] Mar 21 04:20:35 crc kubenswrapper[4923]: E0321 04:20:35.921925 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e11436-0865-4e63-a0c4-65af594ee22b" containerName="route-controller-manager" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.921943 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e11436-0865-4e63-a0c4-65af594ee22b" containerName="route-controller-manager" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.922097 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e11436-0865-4e63-a0c4-65af594ee22b" containerName="route-controller-manager" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.922582 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.924677 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.924907 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.925070 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.925354 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.925533 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.925645 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.929612 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"] Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.939450 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-config\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.939544 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-client-ca\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.939585 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99wgn\" (UniqueName: \"kubernetes.io/projected/84e0e190-a5d3-4693-8876-815ea336a40c-kube-api-access-99wgn\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:35 crc kubenswrapper[4923]: I0321 04:20:35.939638 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e0e190-a5d3-4693-8876-815ea336a40c-serving-cert\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.040854 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e0e190-a5d3-4693-8876-815ea336a40c-serving-cert\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.041010 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-config\") pod 
\"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.041082 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-client-ca\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.041139 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99wgn\" (UniqueName: \"kubernetes.io/projected/84e0e190-a5d3-4693-8876-815ea336a40c-kube-api-access-99wgn\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.042491 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-config\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.042933 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-client-ca\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.046543 4923 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e0e190-a5d3-4693-8876-815ea336a40c-serving-cert\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.074093 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99wgn\" (UniqueName: \"kubernetes.io/projected/84e0e190-a5d3-4693-8876-815ea336a40c-kube-api-access-99wgn\") pod \"route-controller-manager-6799589d9f-jw8hz\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") " pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.239748 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.365998 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e11436-0865-4e63-a0c4-65af594ee22b" path="/var/lib/kubelet/pods/d3e11436-0865-4e63-a0c4-65af594ee22b/volumes" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.511774 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.513502 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:20:36 crc kubenswrapper[4923]: I0321 04:20:36.964900 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hgbdf" Mar 21 04:20:37 crc kubenswrapper[4923]: I0321 04:20:37.547649 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lcp5h" 
Mar 21 04:20:37 crc kubenswrapper[4923]: I0321 04:20:37.547702 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lcp5h"
Mar 21 04:20:37 crc kubenswrapper[4923]: I0321 04:20:37.577681 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"]
Mar 21 04:20:37 crc kubenswrapper[4923]: W0321 04:20:37.597111 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffe5b9e_4cfb_451b_9cac_abe1a00d0f00.slice/crio-8979b694bd38a2c063283c9dab438d75e3f23391685710974c483bd38d276de7 WatchSource:0}: Error finding container 8979b694bd38a2c063283c9dab438d75e3f23391685710974c483bd38d276de7: Status 404 returned error can't find the container with id 8979b694bd38a2c063283c9dab438d75e3f23391685710974c483bd38d276de7
Mar 21 04:20:37 crc kubenswrapper[4923]: I0321 04:20:37.623428 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"]
Mar 21 04:20:37 crc kubenswrapper[4923]: W0321 04:20:37.630349 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e0e190_a5d3_4693_8876_815ea336a40c.slice/crio-2442a8d11bab1fe11b4a553ee678ae55a9e40b49dfdba3e6b752e93ad9147f87 WatchSource:0}: Error finding container 2442a8d11bab1fe11b4a553ee678ae55a9e40b49dfdba3e6b752e93ad9147f87: Status 404 returned error can't find the container with id 2442a8d11bab1fe11b4a553ee678ae55a9e40b49dfdba3e6b752e93ad9147f87
Mar 21 04:20:37 crc kubenswrapper[4923]: I0321 04:20:37.665633 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hgbdf"
Mar 21 04:20:37 crc kubenswrapper[4923]: E0321 04:20:37.738788 4923 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877384d0_10ed_4fca_a600_e8fa69a85648.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice\": RecentStats: unable to find data in memory cache]"
Mar 21 04:20:37 crc kubenswrapper[4923]: I0321 04:20:37.945356 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v7tdq"
Mar 21 04:20:37 crc kubenswrapper[4923]: I0321 04:20:37.946183 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v7tdq"
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.594153 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgbdf"]
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.601525 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m57bq" event={"ID":"e8d52ff5-e444-4d7d-952f-6d95888a7791","Type":"ContainerStarted","Data":"1510692a0afe19b1c0959f4ba675af3e189d5c3ad770df05a20ffb018cbd7a3f"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.602865 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" event={"ID":"84e0e190-a5d3-4693-8876-815ea336a40c","Type":"ContainerStarted","Data":"fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.602900 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" event={"ID":"84e0e190-a5d3-4693-8876-815ea336a40c","Type":"ContainerStarted","Data":"2442a8d11bab1fe11b4a553ee678ae55a9e40b49dfdba3e6b752e93ad9147f87"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.606070 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq8bw" event={"ID":"3dd7868c-7d3e-43de-b0f9-1ab51280fce5","Type":"ContainerStarted","Data":"4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.610958 4923 generic.go:334] "Generic (PLEG): container finished" podID="dd250302-91b0-41c4-b138-89559a78d375" containerID="88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4" exitCode=0
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.611019 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grdl" event={"ID":"dd250302-91b0-41c4-b138-89559a78d375","Type":"ContainerDied","Data":"88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.612486 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lcp5h" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="registry-server" probeResult="failure" output=<
Mar 21 04:20:38 crc kubenswrapper[4923]: timeout: failed to connect service ":50051" within 1s
Mar 21 04:20:38 crc kubenswrapper[4923]: >
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.616731 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5twmg" event={"ID":"3b1d4e5a-6b46-4203-ba69-6440844e48ad","Type":"ContainerStarted","Data":"832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.618274 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" event={"ID":"affe5b9e-4cfb-451b-9cac-abe1a00d0f00","Type":"ContainerStarted","Data":"08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.618312 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" event={"ID":"affe5b9e-4cfb-451b-9cac-abe1a00d0f00","Type":"ContainerStarted","Data":"8979b694bd38a2c063283c9dab438d75e3f23391685710974c483bd38d276de7"}
Mar 21 04:20:38 crc kubenswrapper[4923]: I0321 04:20:38.620924 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2p5d" event={"ID":"d0e9b7e6-79e7-47f0-a488-182df8bb166e","Type":"ContainerStarted","Data":"7e3be2cf0b9ed9d9cb2d8b7bdcbcc70e7b8626076e22ca2c303767c728c32c16"}
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.004896 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v7tdq" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="registry-server" probeResult="failure" output=<
Mar 21 04:20:39 crc kubenswrapper[4923]: timeout: failed to connect service ":50051" within 1s
Mar 21 04:20:39 crc kubenswrapper[4923]: >
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.628313 4923 generic.go:334] "Generic (PLEG): container finished" podID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerID="1510692a0afe19b1c0959f4ba675af3e189d5c3ad770df05a20ffb018cbd7a3f" exitCode=0
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.628495 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m57bq" event={"ID":"e8d52ff5-e444-4d7d-952f-6d95888a7791","Type":"ContainerDied","Data":"1510692a0afe19b1c0959f4ba675af3e189d5c3ad770df05a20ffb018cbd7a3f"}
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.629497 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.629931 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hgbdf" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="registry-server" containerID="cri-o://ef75080c25fb7d9d3207438cf12743c3b72b585e9c65cff6fac5646a8e86b910" gracePeriod=2
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.635592 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.653752 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" podStartSLOduration=7.653727934 podStartE2EDuration="7.653727934s" podCreationTimestamp="2026-03-21 04:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:20:39.650106262 +0000 UTC m=+204.803117359" watchObservedRunningTime="2026-03-21 04:20:39.653727934 +0000 UTC m=+204.806739011"
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.668858 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t2p5d" podStartSLOduration=4.82348735 podStartE2EDuration="1m5.66884224s" podCreationTimestamp="2026-03-21 04:19:34 +0000 UTC" firstStartedPulling="2026-03-21 04:19:35.665470825 +0000 UTC m=+140.818481902" lastFinishedPulling="2026-03-21 04:20:36.510825705 +0000 UTC m=+201.663836792" observedRunningTime="2026-03-21 04:20:39.665764325 +0000 UTC m=+204.818775412" watchObservedRunningTime="2026-03-21 04:20:39.66884224 +0000 UTC m=+204.821853327"
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.747822 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hq8bw" podStartSLOduration=5.193082046 podStartE2EDuration="1m6.747803214s" podCreationTimestamp="2026-03-21 04:19:33 +0000 UTC" firstStartedPulling="2026-03-21 04:19:35.691791242 +0000 UTC m=+140.844802329" lastFinishedPulling="2026-03-21 04:20:37.24651241 +0000 UTC m=+202.399523497" observedRunningTime="2026-03-21 04:20:39.745305957 +0000 UTC m=+204.898317044" watchObservedRunningTime="2026-03-21 04:20:39.747803214 +0000 UTC m=+204.900814301"
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.786707 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" podStartSLOduration=7.786678772 podStartE2EDuration="7.786678772s" podCreationTimestamp="2026-03-21 04:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:20:39.783350079 +0000 UTC m=+204.936361166" watchObservedRunningTime="2026-03-21 04:20:39.786678772 +0000 UTC m=+204.939689859"
Mar 21 04:20:39 crc kubenswrapper[4923]: I0321 04:20:39.787939 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5twmg" podStartSLOduration=7.432606868 podStartE2EDuration="1m4.78792048s" podCreationTimestamp="2026-03-21 04:19:35 +0000 UTC" firstStartedPulling="2026-03-21 04:19:37.848621072 +0000 UTC m=+143.001632159" lastFinishedPulling="2026-03-21 04:20:35.203934684 +0000 UTC m=+200.356945771" observedRunningTime="2026-03-21 04:20:39.76780407 +0000 UTC m=+204.920815157" watchObservedRunningTime="2026-03-21 04:20:39.78792048 +0000 UTC m=+204.940931577"
Mar 21 04:20:41 crc kubenswrapper[4923]: I0321 04:20:41.641534 4923 generic.go:334] "Generic (PLEG): container finished" podID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerID="ef75080c25fb7d9d3207438cf12743c3b72b585e9c65cff6fac5646a8e86b910" exitCode=0
Mar 21 04:20:41 crc kubenswrapper[4923]: I0321 04:20:41.641614 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgbdf" event={"ID":"85f7800b-eaa8-45bd-95b4-ee4885cadf52","Type":"ContainerDied","Data":"ef75080c25fb7d9d3207438cf12743c3b72b585e9c65cff6fac5646a8e86b910"}
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.069773 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgbdf"
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.146888 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-utilities\") pod \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") "
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.146939 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zqmh\" (UniqueName: \"kubernetes.io/projected/85f7800b-eaa8-45bd-95b4-ee4885cadf52-kube-api-access-4zqmh\") pod \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") "
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.146997 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-catalog-content\") pod \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\" (UID: \"85f7800b-eaa8-45bd-95b4-ee4885cadf52\") "
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.148415 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-utilities" (OuterVolumeSpecName: "utilities") pod "85f7800b-eaa8-45bd-95b4-ee4885cadf52" (UID: "85f7800b-eaa8-45bd-95b4-ee4885cadf52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.152943 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f7800b-eaa8-45bd-95b4-ee4885cadf52-kube-api-access-4zqmh" (OuterVolumeSpecName: "kube-api-access-4zqmh") pod "85f7800b-eaa8-45bd-95b4-ee4885cadf52" (UID: "85f7800b-eaa8-45bd-95b4-ee4885cadf52"). InnerVolumeSpecName "kube-api-access-4zqmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.172724 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85f7800b-eaa8-45bd-95b4-ee4885cadf52" (UID: "85f7800b-eaa8-45bd-95b4-ee4885cadf52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.248043 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.248080 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zqmh\" (UniqueName: \"kubernetes.io/projected/85f7800b-eaa8-45bd-95b4-ee4885cadf52-kube-api-access-4zqmh\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.248092 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85f7800b-eaa8-45bd-95b4-ee4885cadf52-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.655409 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgbdf" event={"ID":"85f7800b-eaa8-45bd-95b4-ee4885cadf52","Type":"ContainerDied","Data":"530eae2f61f161bca3218ff62027a80d8e83e9e8d3a64e5916ac4f8d9017cea5"}
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.655538 4923 scope.go:117] "RemoveContainer" containerID="ef75080c25fb7d9d3207438cf12743c3b72b585e9c65cff6fac5646a8e86b910"
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.655548 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgbdf"
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.682192 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgbdf"]
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.686505 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgbdf"]
Mar 21 04:20:42 crc kubenswrapper[4923]: I0321 04:20:42.701173 4923 scope.go:117] "RemoveContainer" containerID="1f654587163751d425bafcfd05d9df6befa8daec49704778cbb991391feba002"
Mar 21 04:20:43 crc kubenswrapper[4923]: I0321 04:20:43.127041 4923 scope.go:117] "RemoveContainer" containerID="da55dd6610e5bbbc4340d264c50c5066c84dbe6d1a1c4de5449c7a1505212792"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.141564 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hq8bw"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.142026 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hq8bw"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.190418 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hq8bw"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.383791 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" path="/var/lib/kubelet/pods/85f7800b-eaa8-45bd-95b4-ee4885cadf52/volumes"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.672707 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m57bq" event={"ID":"e8d52ff5-e444-4d7d-952f-6d95888a7791","Type":"ContainerStarted","Data":"4da8c14e907bb9e82ae352f621c5ae8bee61b898cb1217698674f4ee753d987d"}
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.674924 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grdl" event={"ID":"dd250302-91b0-41c4-b138-89559a78d375","Type":"ContainerStarted","Data":"9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c"}
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.695891 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m57bq" podStartSLOduration=2.267495599 podStartE2EDuration="1m10.695873641s" podCreationTimestamp="2026-03-21 04:19:34 +0000 UTC" firstStartedPulling="2026-03-21 04:19:35.68870832 +0000 UTC m=+140.841719407" lastFinishedPulling="2026-03-21 04:20:44.117086312 +0000 UTC m=+209.270097449" observedRunningTime="2026-03-21 04:20:44.695338315 +0000 UTC m=+209.848349402" watchObservedRunningTime="2026-03-21 04:20:44.695873641 +0000 UTC m=+209.848884728"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.717619 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9grdl" podStartSLOduration=4.238180965 podStartE2EDuration="1m11.717603031s" podCreationTimestamp="2026-03-21 04:19:33 +0000 UTC" firstStartedPulling="2026-03-21 04:19:35.647790596 +0000 UTC m=+140.800801683" lastFinishedPulling="2026-03-21 04:20:43.127212622 +0000 UTC m=+208.280223749" observedRunningTime="2026-03-21 04:20:44.71593056 +0000 UTC m=+209.868941657" watchObservedRunningTime="2026-03-21 04:20:44.717603031 +0000 UTC m=+209.870614118"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.721857 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t2p5d"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.722286 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t2p5d"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.725866 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hq8bw"
Mar 21 04:20:44 crc kubenswrapper[4923]: I0321 04:20:44.783661 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t2p5d"
Mar 21 04:20:45 crc kubenswrapper[4923]: I0321 04:20:45.732836 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t2p5d"
Mar 21 04:20:46 crc kubenswrapper[4923]: I0321 04:20:46.103751 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5twmg"
Mar 21 04:20:46 crc kubenswrapper[4923]: I0321 04:20:46.103802 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5twmg"
Mar 21 04:20:46 crc kubenswrapper[4923]: I0321 04:20:46.155058 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5twmg"
Mar 21 04:20:46 crc kubenswrapper[4923]: I0321 04:20:46.241026 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"
Mar 21 04:20:46 crc kubenswrapper[4923]: I0321 04:20:46.247665 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"
Mar 21 04:20:46 crc kubenswrapper[4923]: I0321 04:20:46.714360 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jpr6"]
Mar 21 04:20:46 crc kubenswrapper[4923]: I0321 04:20:46.828759 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5twmg"
Mar 21 04:20:47 crc kubenswrapper[4923]: I0321 04:20:47.596200 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lcp5h"
Mar 21 04:20:47 crc kubenswrapper[4923]: I0321 04:20:47.638185 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lcp5h"
Mar 21 04:20:47 crc kubenswrapper[4923]: E0321 04:20:47.908437 4923 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877384d0_10ed_4fca_a600_e8fa69a85648.slice\": RecentStats: unable to find data in memory cache]"
Mar 21 04:20:47 crc kubenswrapper[4923]: I0321 04:20:47.994643 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v7tdq"
Mar 21 04:20:48 crc kubenswrapper[4923]: I0321 04:20:48.035434 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v7tdq"
Mar 21 04:20:48 crc kubenswrapper[4923]: I0321 04:20:48.796392 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t2p5d"]
Mar 21 04:20:48 crc kubenswrapper[4923]: I0321 04:20:48.796771 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t2p5d" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="registry-server" containerID="cri-o://7e3be2cf0b9ed9d9cb2d8b7bdcbcc70e7b8626076e22ca2c303767c728c32c16" gracePeriod=2
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.767169 4923 generic.go:334] "Generic (PLEG): container finished" podID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerID="7e3be2cf0b9ed9d9cb2d8b7bdcbcc70e7b8626076e22ca2c303767c728c32c16" exitCode=0
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.767248 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2p5d" event={"ID":"d0e9b7e6-79e7-47f0-a488-182df8bb166e","Type":"ContainerDied","Data":"7e3be2cf0b9ed9d9cb2d8b7bdcbcc70e7b8626076e22ca2c303767c728c32c16"}
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.837372 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t2p5d"
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.871524 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpt67\" (UniqueName: \"kubernetes.io/projected/d0e9b7e6-79e7-47f0-a488-182df8bb166e-kube-api-access-hpt67\") pod \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") "
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.871583 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-utilities\") pod \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") "
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.871637 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-catalog-content\") pod \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\" (UID: \"d0e9b7e6-79e7-47f0-a488-182df8bb166e\") "
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.872670 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-utilities" (OuterVolumeSpecName: "utilities") pod "d0e9b7e6-79e7-47f0-a488-182df8bb166e" (UID: "d0e9b7e6-79e7-47f0-a488-182df8bb166e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.878528 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e9b7e6-79e7-47f0-a488-182df8bb166e-kube-api-access-hpt67" (OuterVolumeSpecName: "kube-api-access-hpt67") pod "d0e9b7e6-79e7-47f0-a488-182df8bb166e" (UID: "d0e9b7e6-79e7-47f0-a488-182df8bb166e"). InnerVolumeSpecName "kube-api-access-hpt67". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.924532 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0e9b7e6-79e7-47f0-a488-182df8bb166e" (UID: "d0e9b7e6-79e7-47f0-a488-182df8bb166e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.972708 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpt67\" (UniqueName: \"kubernetes.io/projected/d0e9b7e6-79e7-47f0-a488-182df8bb166e-kube-api-access-hpt67\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.972741 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:49 crc kubenswrapper[4923]: I0321 04:20:49.972752 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e9b7e6-79e7-47f0-a488-182df8bb166e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:50 crc kubenswrapper[4923]: I0321 04:20:50.778988 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2p5d" event={"ID":"d0e9b7e6-79e7-47f0-a488-182df8bb166e","Type":"ContainerDied","Data":"65eb9c6c00a9a7afb7a1f6331700c86053349040e7f3be99a9cd901e4cca1c3f"}
Mar 21 04:20:50 crc kubenswrapper[4923]: I0321 04:20:50.779091 4923 scope.go:117] "RemoveContainer" containerID="7e3be2cf0b9ed9d9cb2d8b7bdcbcc70e7b8626076e22ca2c303767c728c32c16"
Mar 21 04:20:50 crc kubenswrapper[4923]: I0321 04:20:50.779104 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t2p5d"
Mar 21 04:20:50 crc kubenswrapper[4923]: I0321 04:20:50.811883 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t2p5d"]
Mar 21 04:20:50 crc kubenswrapper[4923]: I0321 04:20:50.817424 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t2p5d"]
Mar 21 04:20:50 crc kubenswrapper[4923]: I0321 04:20:50.818637 4923 scope.go:117] "RemoveContainer" containerID="4e6eea6e579b008a25a73cc45d531895353c502bb2d7dda375fdbc14ffa2b45a"
Mar 21 04:20:50 crc kubenswrapper[4923]: I0321 04:20:50.848032 4923 scope.go:117] "RemoveContainer" containerID="2055db5135fab7f76bfea1af811a0c8c593abf8278c2515782087ad1667eb621"
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.207068 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v7tdq"]
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.207758 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v7tdq" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="registry-server" containerID="cri-o://c88da9b680adeecf068170339af886890e0774f7272ad159acfedfd05c754dd1" gracePeriod=2
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.791190 4923 generic.go:334] "Generic (PLEG): container finished" podID="44513839-d352-4720-a552-bc11c6030391" containerID="c88da9b680adeecf068170339af886890e0774f7272ad159acfedfd05c754dd1" exitCode=0
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.791249 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7tdq" event={"ID":"44513839-d352-4720-a552-bc11c6030391","Type":"ContainerDied","Data":"c88da9b680adeecf068170339af886890e0774f7272ad159acfedfd05c754dd1"}
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.791287 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v7tdq" event={"ID":"44513839-d352-4720-a552-bc11c6030391","Type":"ContainerDied","Data":"7f89a8eed38723e994c831923b0f527a6bf50c59c3cadd34e4284f4019de20be"}
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.791305 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f89a8eed38723e994c831923b0f527a6bf50c59c3cadd34e4284f4019de20be"
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.791375 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v7tdq"
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.902803 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-utilities\") pod \"44513839-d352-4720-a552-bc11c6030391\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") "
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.902873 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-catalog-content\") pod \"44513839-d352-4720-a552-bc11c6030391\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") "
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.902943 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j8wc\" (UniqueName: \"kubernetes.io/projected/44513839-d352-4720-a552-bc11c6030391-kube-api-access-6j8wc\") pod \"44513839-d352-4720-a552-bc11c6030391\" (UID: \"44513839-d352-4720-a552-bc11c6030391\") "
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.903837 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-utilities" (OuterVolumeSpecName: "utilities") pod "44513839-d352-4720-a552-bc11c6030391" (UID: "44513839-d352-4720-a552-bc11c6030391"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:20:51 crc kubenswrapper[4923]: I0321 04:20:51.907965 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44513839-d352-4720-a552-bc11c6030391-kube-api-access-6j8wc" (OuterVolumeSpecName: "kube-api-access-6j8wc") pod "44513839-d352-4720-a552-bc11c6030391" (UID: "44513839-d352-4720-a552-bc11c6030391"). InnerVolumeSpecName "kube-api-access-6j8wc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.004706 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j8wc\" (UniqueName: \"kubernetes.io/projected/44513839-d352-4720-a552-bc11c6030391-kube-api-access-6j8wc\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.004737 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.032500 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44513839-d352-4720-a552-bc11c6030391" (UID: "44513839-d352-4720-a552-bc11c6030391"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.105574 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44513839-d352-4720-a552-bc11c6030391-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.370692 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" path="/var/lib/kubelet/pods/d0e9b7e6-79e7-47f0-a488-182df8bb166e/volumes"
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.772381 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"]
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.772736 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" podUID="affe5b9e-4cfb-451b-9cac-abe1a00d0f00" containerName="controller-manager" containerID="cri-o://08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f" gracePeriod=30
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.797689 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v7tdq"
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.822634 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v7tdq"]
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.827888 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v7tdq"]
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.862359 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"]
Mar 21 04:20:52 crc kubenswrapper[4923]: I0321 04:20:52.862546 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" podUID="84e0e190-a5d3-4693-8876-815ea336a40c" containerName="route-controller-manager" containerID="cri-o://fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a" gracePeriod=30
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.369854 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.523471 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-config\") pod \"84e0e190-a5d3-4693-8876-815ea336a40c\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") "
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.523506 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-client-ca\") pod \"84e0e190-a5d3-4693-8876-815ea336a40c\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") "
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.523532 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e0e190-a5d3-4693-8876-815ea336a40c-serving-cert\") pod \"84e0e190-a5d3-4693-8876-815ea336a40c\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") "
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.523570 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99wgn\" (UniqueName: \"kubernetes.io/projected/84e0e190-a5d3-4693-8876-815ea336a40c-kube-api-access-99wgn\") pod \"84e0e190-a5d3-4693-8876-815ea336a40c\" (UID: \"84e0e190-a5d3-4693-8876-815ea336a40c\") "
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.524177 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-client-ca" (OuterVolumeSpecName: "client-ca") pod "84e0e190-a5d3-4693-8876-815ea336a40c" (UID: "84e0e190-a5d3-4693-8876-815ea336a40c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.524191 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-config" (OuterVolumeSpecName: "config") pod "84e0e190-a5d3-4693-8876-815ea336a40c" (UID: "84e0e190-a5d3-4693-8876-815ea336a40c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.527098 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e0e190-a5d3-4693-8876-815ea336a40c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84e0e190-a5d3-4693-8876-815ea336a40c" (UID: "84e0e190-a5d3-4693-8876-815ea336a40c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.527174 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e0e190-a5d3-4693-8876-815ea336a40c-kube-api-access-99wgn" (OuterVolumeSpecName: "kube-api-access-99wgn") pod "84e0e190-a5d3-4693-8876-815ea336a40c" (UID: "84e0e190-a5d3-4693-8876-815ea336a40c"). InnerVolumeSpecName "kube-api-access-99wgn".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.625146 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.625186 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e0e190-a5d3-4693-8876-815ea336a40c-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.625201 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e0e190-a5d3-4693-8876-815ea336a40c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.625214 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99wgn\" (UniqueName: \"kubernetes.io/projected/84e0e190-a5d3-4693-8876-815ea336a40c-kube-api-access-99wgn\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.641536 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.780546 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.812357 4923 generic.go:334] "Generic (PLEG): container finished" podID="84e0e190-a5d3-4693-8876-815ea336a40c" containerID="fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a" exitCode=0 Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.812400 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" event={"ID":"84e0e190-a5d3-4693-8876-815ea336a40c","Type":"ContainerDied","Data":"fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a"} Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.812431 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" event={"ID":"84e0e190-a5d3-4693-8876-815ea336a40c","Type":"ContainerDied","Data":"2442a8d11bab1fe11b4a553ee678ae55a9e40b49dfdba3e6b752e93ad9147f87"} Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.812383 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.812448 4923 scope.go:117] "RemoveContainer" containerID="fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.814574 4923 generic.go:334] "Generic (PLEG): container finished" podID="affe5b9e-4cfb-451b-9cac-abe1a00d0f00" containerID="08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f" exitCode=0 Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.814600 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" event={"ID":"affe5b9e-4cfb-451b-9cac-abe1a00d0f00","Type":"ContainerDied","Data":"08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f"} Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.814621 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" event={"ID":"affe5b9e-4cfb-451b-9cac-abe1a00d0f00","Type":"ContainerDied","Data":"8979b694bd38a2c063283c9dab438d75e3f23391685710974c483bd38d276de7"} Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.814705 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84bddf74b8-qhfnk" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.840958 4923 scope.go:117] "RemoveContainer" containerID="fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.841581 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a\": container with ID starting with fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a not found: ID does not exist" containerID="fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.841610 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a"} err="failed to get container status \"fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a\": rpc error: code = NotFound desc = could not find container \"fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a\": container with ID starting with fcb0a6b08c35289fe51e0ae3ce6329d2a40c428e128f7069a5be4b83db883b4a not found: ID does not exist" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.841634 4923 scope.go:117] "RemoveContainer" containerID="08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.861787 4923 scope.go:117] "RemoveContainer" containerID="08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.862175 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f\": container with ID starting with 
08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f not found: ID does not exist" containerID="08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.862204 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f"} err="failed to get container status \"08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f\": rpc error: code = NotFound desc = could not find container \"08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f\": container with ID starting with 08b4240bc379c4c82a86a447e0f2b23babccefd848128c410f3d9c0fd402261f not found: ID does not exist" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.862603 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"] Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.868383 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6799589d9f-jw8hz"] Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.928638 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66rqp\" (UniqueName: \"kubernetes.io/projected/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-kube-api-access-66rqp\") pod \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.928744 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-config\") pod \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.928776 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-serving-cert\") pod \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.928858 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-proxy-ca-bundles\") pod \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.928921 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-client-ca\") pod \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\" (UID: \"affe5b9e-4cfb-451b-9cac-abe1a00d0f00\") " Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.930000 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-client-ca" (OuterVolumeSpecName: "client-ca") pod "affe5b9e-4cfb-451b-9cac-abe1a00d0f00" (UID: "affe5b9e-4cfb-451b-9cac-abe1a00d0f00"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.930013 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "affe5b9e-4cfb-451b-9cac-abe1a00d0f00" (UID: "affe5b9e-4cfb-451b-9cac-abe1a00d0f00"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.930541 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-config" (OuterVolumeSpecName: "config") pod "affe5b9e-4cfb-451b-9cac-abe1a00d0f00" (UID: "affe5b9e-4cfb-451b-9cac-abe1a00d0f00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.934880 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7899f58997-f8wzq"] Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935378 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="extract-utilities" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935396 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="extract-utilities" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935410 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affe5b9e-4cfb-451b-9cac-abe1a00d0f00" containerName="controller-manager" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935419 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="affe5b9e-4cfb-451b-9cac-abe1a00d0f00" containerName="controller-manager" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935430 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="extract-utilities" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935437 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="extract-utilities" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935449 4923 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935456 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935469 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e0e190-a5d3-4693-8876-815ea336a40c" containerName="route-controller-manager" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935476 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e0e190-a5d3-4693-8876-815ea336a40c" containerName="route-controller-manager" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935485 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935492 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935503 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="extract-content" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935511 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="extract-content" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935527 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="extract-content" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935534 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="extract-content" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935544 4923 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="extract-content" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935551 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="extract-content" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935563 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935718 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "affe5b9e-4cfb-451b-9cac-abe1a00d0f00" (UID: "affe5b9e-4cfb-451b-9cac-abe1a00d0f00"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935751 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: E0321 04:20:53.935819 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="extract-utilities" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.935868 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="extract-utilities" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.936129 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e9b7e6-79e7-47f0-a488-182df8bb166e" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.936146 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="44513839-d352-4720-a552-bc11c6030391" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.936153 4923 
memory_manager.go:354] "RemoveStaleState removing state" podUID="84e0e190-a5d3-4693-8876-815ea336a40c" containerName="route-controller-manager" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.936170 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="affe5b9e-4cfb-451b-9cac-abe1a00d0f00" containerName="controller-manager" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.936182 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f7800b-eaa8-45bd-95b4-ee4885cadf52" containerName="registry-server" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.936570 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb"] Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.937615 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.939493 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-kube-api-access-66rqp" (OuterVolumeSpecName: "kube-api-access-66rqp") pod "affe5b9e-4cfb-451b-9cac-abe1a00d0f00" (UID: "affe5b9e-4cfb-451b-9cac-abe1a00d0f00"). InnerVolumeSpecName "kube-api-access-66rqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.939796 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.941299 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb"] Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.941688 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.942264 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.942562 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.944263 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.945097 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.945474 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 04:20:53 crc kubenswrapper[4923]: I0321 04:20:53.946627 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7899f58997-f8wzq"] Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.030159 4923 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-client-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.030197 4923 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-66rqp\" (UniqueName: \"kubernetes.io/projected/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-kube-api-access-66rqp\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.030208 4923 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.030217 4923 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.030225 4923 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/affe5b9e-4cfb-451b-9cac-abe1a00d0f00-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131268 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zh8s\" (UniqueName: \"kubernetes.io/projected/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-kube-api-access-2zh8s\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131419 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9zm\" (UniqueName: \"kubernetes.io/projected/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-kube-api-access-6t9zm\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131486 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-client-ca\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131577 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-serving-cert\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131616 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-config\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131666 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-serving-cert\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131733 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-proxy-ca-bundles\") pod \"controller-manager-7899f58997-f8wzq\" (UID: 
\"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131810 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-client-ca\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.131852 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-config\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.156609 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"] Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.159428 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84bddf74b8-qhfnk"] Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233536 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-serving-cert\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233607 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-config\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233666 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-serving-cert\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233725 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-proxy-ca-bundles\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233761 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-config\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233793 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-client-ca\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233848 
4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zh8s\" (UniqueName: \"kubernetes.io/projected/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-kube-api-access-2zh8s\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233920 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9zm\" (UniqueName: \"kubernetes.io/projected/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-kube-api-access-6t9zm\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.233958 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-client-ca\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.235187 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-config\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.235633 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-config\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " 
pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.235673 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-client-ca\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.236108 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-proxy-ca-bundles\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.238216 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-client-ca\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.240678 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-serving-cert\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.242930 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-serving-cert\") pod 
\"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.249928 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zh8s\" (UniqueName: \"kubernetes.io/projected/3c7ee4e5-1e01-4fba-9ddc-53e671d130bb-kube-api-access-2zh8s\") pod \"route-controller-manager-6bf6d879dc-rttfb\" (UID: \"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb\") " pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.253631 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9zm\" (UniqueName: \"kubernetes.io/projected/9b9ee07c-4061-4cd2-80e7-c70d2c80577e-kube-api-access-6t9zm\") pod \"controller-manager-7899f58997-f8wzq\" (UID: \"9b9ee07c-4061-4cd2-80e7-c70d2c80577e\") " pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.266462 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.283176 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.319479 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.319530 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.365300 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44513839-d352-4720-a552-bc11c6030391" path="/var/lib/kubelet/pods/44513839-d352-4720-a552-bc11c6030391/volumes" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.366398 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e0e190-a5d3-4693-8876-815ea336a40c" path="/var/lib/kubelet/pods/84e0e190-a5d3-4693-8876-815ea336a40c/volumes" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.367149 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affe5b9e-4cfb-451b-9cac-abe1a00d0f00" path="/var/lib/kubelet/pods/affe5b9e-4cfb-451b-9cac-abe1a00d0f00/volumes" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.368437 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.528879 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.528937 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.577172 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m57bq" 
Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.735844 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb"] Mar 21 04:20:54 crc kubenswrapper[4923]: W0321 04:20:54.745707 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c7ee4e5_1e01_4fba_9ddc_53e671d130bb.slice/crio-8d3260a4e8a001dc52761c5d4c4b7bd8eded5653b24276cb9965e7d2c6d74460 WatchSource:0}: Error finding container 8d3260a4e8a001dc52761c5d4c4b7bd8eded5653b24276cb9965e7d2c6d74460: Status 404 returned error can't find the container with id 8d3260a4e8a001dc52761c5d4c4b7bd8eded5653b24276cb9965e7d2c6d74460 Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.771978 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7899f58997-f8wzq"] Mar 21 04:20:54 crc kubenswrapper[4923]: W0321 04:20:54.776377 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b9ee07c_4061_4cd2_80e7_c70d2c80577e.slice/crio-b030384ef3c58c344977c0522cb09d7e49663196d472ed0131aa6f65f635c0c8 WatchSource:0}: Error finding container b030384ef3c58c344977c0522cb09d7e49663196d472ed0131aa6f65f635c0c8: Status 404 returned error can't find the container with id b030384ef3c58c344977c0522cb09d7e49663196d472ed0131aa6f65f635c0c8 Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.820581 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" event={"ID":"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb","Type":"ContainerStarted","Data":"8d3260a4e8a001dc52761c5d4c4b7bd8eded5653b24276cb9965e7d2c6d74460"} Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.821985 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" event={"ID":"9b9ee07c-4061-4cd2-80e7-c70d2c80577e","Type":"ContainerStarted","Data":"b030384ef3c58c344977c0522cb09d7e49663196d472ed0131aa6f65f635c0c8"} Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.879773 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.891722 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.966078 4923 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.966875 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.967475 4923 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.967872 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de" gracePeriod=15 Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.967950 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec" gracePeriod=15 Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.968038 4923 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97" gracePeriod=15 Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.968041 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734" gracePeriod=15 Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.967882 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95" gracePeriod=15 Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969237 4923 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969520 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969538 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969557 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969566 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969579 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969587 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969599 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969608 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969620 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969629 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969641 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969650 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969659 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969667 4923 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969684 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969692 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.969703 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.969711 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973420 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973451 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973464 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973479 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973492 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 
21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973506 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973525 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: E0321 04:20:54.973722 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973739 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.973879 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:54 crc kubenswrapper[4923]: I0321 04:20:54.974230 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.043806 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145510 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145819 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145848 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145879 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145898 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145916 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145945 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.145967 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246616 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246668 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246709 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246732 4923 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246803 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246809 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246749 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246870 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246881 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246921 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.246969 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.247017 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.247083 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.247132 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.247202 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.247173 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.339718 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:20:55 crc kubenswrapper[4923]: W0321 04:20:55.369745 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e6f67b42a171a04ac9c0efe28cba1a68d8e076ece98fa6d0b86f3e6d75939260 WatchSource:0}: Error finding container e6f67b42a171a04ac9c0efe28cba1a68d8e076ece98fa6d0b86f3e6d75939260: Status 404 returned error can't find the container with id e6f67b42a171a04ac9c0efe28cba1a68d8e076ece98fa6d0b86f3e6d75939260 Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.833046 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" event={"ID":"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb","Type":"ContainerStarted","Data":"d60c28096ba606744e99de672da6c349b48e5c58334e040b97a87e1ef9f03867"} Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.833395 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.835643 4923 generic.go:334] "Generic (PLEG): container finished" podID="3d694cb6-70ab-4c6b-98ee-9aa819980356" containerID="3defa9f1db1c3bb755d3dde0265c18ddf343df06466e34c33abe79e777f246d7" exitCode=0 Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.835724 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d694cb6-70ab-4c6b-98ee-9aa819980356","Type":"ContainerDied","Data":"3defa9f1db1c3bb755d3dde0265c18ddf343df06466e34c33abe79e777f246d7"} Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.835873 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.836445 4923 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.836954 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.837506 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.837802 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" event={"ID":"9b9ee07c-4061-4cd2-80e7-c70d2c80577e","Type":"ContainerStarted","Data":"4c17f106b5e532d186b45c35b27392bafc1d8b0c23eaeaf73246c69b60339dca"} Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.838223 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.839071 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.839761 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0bb5a00c56f27497688f44231653e776caa9673c3d8a657c36dc3a5f06867724"} Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.839883 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e6f67b42a171a04ac9c0efe28cba1a68d8e076ece98fa6d0b86f3e6d75939260"} Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.839782 4923 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.840439 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.841017 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.841560 4923 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.842055 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.842592 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.842751 4923 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.844726 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.845631 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95" exitCode=0 Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.845665 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97" exitCode=0 Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.845684 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec" exitCode=0 Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.845688 4923 scope.go:117] "RemoveContainer" containerID="255966e1808c105be31fb4f94cbc0d42750c3eed0dbb11cc0da41840834dbdec" Mar 21 04:20:55 crc kubenswrapper[4923]: I0321 04:20:55.845701 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734" exitCode=2 Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.141828 4923 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.141903 4923 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: E0321 04:20:56.360440 4923 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: E0321 04:20:56.361207 4923 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: E0321 04:20:56.362197 4923 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: E0321 04:20:56.362768 4923 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: E0321 04:20:56.363297 4923 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.363388 4923 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" 
Mar 21 04:20:56 crc kubenswrapper[4923]: E0321 04:20:56.364988 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="200ms" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.369142 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.369693 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.370178 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.370833 4923 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 
04:20:56.371484 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: E0321 04:20:56.566244 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="400ms" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.833865 4923 patch_prober.go:28] interesting pod/route-controller-manager-6bf6d879dc-rttfb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.833966 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.858486 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.860134 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.872261 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.872837 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.873277 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.873758 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc kubenswrapper[4923]: I0321 04:20:56.874072 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:56 crc 
kubenswrapper[4923]: E0321 04:20:56.967307 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="800ms" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.239478 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.240349 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.240743 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.240971 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.241195 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.376672 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-kubelet-dir\") pod \"3d694cb6-70ab-4c6b-98ee-9aa819980356\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.376731 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d694cb6-70ab-4c6b-98ee-9aa819980356-kube-api-access\") pod \"3d694cb6-70ab-4c6b-98ee-9aa819980356\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.376770 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-var-lock\") pod \"3d694cb6-70ab-4c6b-98ee-9aa819980356\" (UID: \"3d694cb6-70ab-4c6b-98ee-9aa819980356\") " Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.377054 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-var-lock" (OuterVolumeSpecName: "var-lock") pod "3d694cb6-70ab-4c6b-98ee-9aa819980356" (UID: "3d694cb6-70ab-4c6b-98ee-9aa819980356"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.377412 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d694cb6-70ab-4c6b-98ee-9aa819980356" (UID: "3d694cb6-70ab-4c6b-98ee-9aa819980356"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.383561 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d694cb6-70ab-4c6b-98ee-9aa819980356-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d694cb6-70ab-4c6b-98ee-9aa819980356" (UID: "3d694cb6-70ab-4c6b-98ee-9aa819980356"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.413187 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.414241 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.414850 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.415461 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.415866 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.416312 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.416758 4923 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.478433 4923 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.478472 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d694cb6-70ab-4c6b-98ee-9aa819980356-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.478483 4923 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d694cb6-70ab-4c6b-98ee-9aa819980356-var-lock\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.579598 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.579696 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.579762 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 
04:20:57.580301 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.580426 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.580426 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.682069 4923 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.682117 4923 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.682138 4923 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:20:57 crc kubenswrapper[4923]: E0321 04:20:57.768911 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="1.6s" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.860516 4923 patch_prober.go:28] interesting pod/route-controller-manager-6bf6d879dc-rttfb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.860642 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.875517 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d694cb6-70ab-4c6b-98ee-9aa819980356","Type":"ContainerDied","Data":"b0a0b1f4ce5623e0647ee025df56fb9ac91147964b8dea3b2d312348d9a5a0b1"} Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.875580 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a0b1f4ce5623e0647ee025df56fb9ac91147964b8dea3b2d312348d9a5a0b1" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.875535 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.879023 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.879995 4923 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de" exitCode=0 Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.881184 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.881509 4923 scope.go:117] "RemoveContainer" containerID="2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.895057 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.895442 4923 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.895854 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.896202 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 
04:20:57.896512 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.898155 4923 scope.go:117] "RemoveContainer" containerID="7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.916053 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.918478 4923 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.918540 4923 scope.go:117] "RemoveContainer" containerID="e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.918890 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.919362 4923 status_manager.go:851] "Failed to get status for 
pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.919752 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.933753 4923 scope.go:117] "RemoveContainer" containerID="860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.950748 4923 scope.go:117] "RemoveContainer" containerID="ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.967946 4923 scope.go:117] "RemoveContainer" containerID="b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.987924 4923 scope.go:117] "RemoveContainer" containerID="2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95" Mar 21 04:20:57 crc kubenswrapper[4923]: E0321 04:20:57.988362 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95\": container with ID starting with 2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95 not found: ID does not exist" containerID="2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 
04:20:57.988400 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95"} err="failed to get container status \"2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95\": rpc error: code = NotFound desc = could not find container \"2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95\": container with ID starting with 2b50fd31a4dc122bfbc5e25fa58276c94141fdadca20ae5c2c2a4a6b8926ef95 not found: ID does not exist" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.988426 4923 scope.go:117] "RemoveContainer" containerID="7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97" Mar 21 04:20:57 crc kubenswrapper[4923]: E0321 04:20:57.988752 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97\": container with ID starting with 7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97 not found: ID does not exist" containerID="7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.988802 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97"} err="failed to get container status \"7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97\": rpc error: code = NotFound desc = could not find container \"7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97\": container with ID starting with 7e55413df017e33bb843603b26a5b32281456dc52c62f71753c178a8525d6c97 not found: ID does not exist" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.988834 4923 scope.go:117] "RemoveContainer" containerID="e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec" Mar 21 04:20:57 crc 
kubenswrapper[4923]: E0321 04:20:57.989111 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec\": container with ID starting with e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec not found: ID does not exist" containerID="e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.989140 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec"} err="failed to get container status \"e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec\": rpc error: code = NotFound desc = could not find container \"e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec\": container with ID starting with e95af86be0d3a51c05ab8ddbf8e9be317ea430fcc6b7efb8dac90dfca843deec not found: ID does not exist" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.989159 4923 scope.go:117] "RemoveContainer" containerID="860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734" Mar 21 04:20:57 crc kubenswrapper[4923]: E0321 04:20:57.989700 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734\": container with ID starting with 860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734 not found: ID does not exist" containerID="860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.989725 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734"} err="failed to get container status 
\"860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734\": rpc error: code = NotFound desc = could not find container \"860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734\": container with ID starting with 860ca01d512db7ed01ca9aac8b52452665cee86f8d0708c5793071548f0ea734 not found: ID does not exist" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.989742 4923 scope.go:117] "RemoveContainer" containerID="ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de" Mar 21 04:20:57 crc kubenswrapper[4923]: E0321 04:20:57.989963 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de\": container with ID starting with ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de not found: ID does not exist" containerID="ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.989996 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de"} err="failed to get container status \"ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de\": rpc error: code = NotFound desc = could not find container \"ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de\": container with ID starting with ddf3580c0287adb92a6be49eb1851df029fb50540434a0481e695f818f1a39de not found: ID does not exist" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.990017 4923 scope.go:117] "RemoveContainer" containerID="b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc" Mar 21 04:20:57 crc kubenswrapper[4923]: E0321 04:20:57.990283 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc\": container with ID starting with b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc not found: ID does not exist" containerID="b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc" Mar 21 04:20:57 crc kubenswrapper[4923]: I0321 04:20:57.990311 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc"} err="failed to get container status \"b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc\": rpc error: code = NotFound desc = could not find container \"b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc\": container with ID starting with b5e8ebcf8706695d71fdfa822592c1bac2cd511db956c5cfa7096840933042bc not found: ID does not exist" Mar 21 04:20:58 crc kubenswrapper[4923]: E0321 04:20:58.037859 4923 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877384d0_10ed_4fca_a600_e8fa69a85648.slice\": RecentStats: unable to find data in memory cache]" Mar 21 04:20:58 crc kubenswrapper[4923]: I0321 04:20:58.386103 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 21 04:20:59 crc kubenswrapper[4923]: E0321 04:20:59.370487 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="3.2s" Mar 21 04:21:00 crc kubenswrapper[4923]: E0321 
04:21:00.055421 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-7899f58997-f8wzq.189ec067bf4a0d7e openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-7899f58997-f8wzq,UID:9b9ee07c-4061-4cd2-80e7-c70d2c80577e,APIVersion:v1,ResourceVersion:29878,FieldPath:spec.containers{controller-manager},},Reason:Created,Message:Created container controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:20:55.054019966 +0000 UTC m=+220.207031073,LastTimestamp:2026-03-21 04:20:55.054019966 +0000 UTC m=+220.207031073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:21:01 crc kubenswrapper[4923]: E0321 04:21:01.506370 4923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{controller-manager-7899f58997-f8wzq.189ec067bf4a0d7e openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-7899f58997-f8wzq,UID:9b9ee07c-4061-4cd2-80e7-c70d2c80577e,APIVersion:v1,ResourceVersion:29878,FieldPath:spec.containers{controller-manager},},Reason:Created,Message:Created container controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-21 04:20:55.054019966 +0000 UTC m=+220.207031073,LastTimestamp:2026-03-21 04:20:55.054019966 +0000 UTC 
m=+220.207031073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 21 04:21:02 crc kubenswrapper[4923]: E0321 04:21:02.571976 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="6.4s" Mar 21 04:21:03 crc kubenswrapper[4923]: I0321 04:21:03.235831 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:21:03 crc kubenswrapper[4923]: I0321 04:21:03.235944 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:21:05 crc kubenswrapper[4923]: I0321 04:21:05.284439 4923 patch_prober.go:28] interesting pod/route-controller-manager-6bf6d879dc-rttfb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:21:05 crc kubenswrapper[4923]: I0321 04:21:05.284866 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:21:06 crc kubenswrapper[4923]: I0321 04:21:06.363384 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:06 crc kubenswrapper[4923]: I0321 04:21:06.363972 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:06 crc kubenswrapper[4923]: I0321 04:21:06.364510 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:06 crc kubenswrapper[4923]: I0321 04:21:06.364981 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:07 crc kubenswrapper[4923]: E0321 04:21:07.051904 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:21:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:21:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:21:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-21T04:21:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:07 crc kubenswrapper[4923]: E0321 04:21:07.052369 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:07 crc kubenswrapper[4923]: E0321 04:21:07.053002 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:07 crc kubenswrapper[4923]: E0321 04:21:07.053406 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 
04:21:07 crc kubenswrapper[4923]: E0321 04:21:07.053841 4923 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:07 crc kubenswrapper[4923]: E0321 04:21:07.053867 4923 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 21 04:21:08 crc kubenswrapper[4923]: E0321 04:21:08.174576 4923 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877384d0_10ed_4fca_a600_e8fa69a85648.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4b3225e_19a4_40c0_b9fe_b93566ed64e9.slice\": RecentStats: unable to find data in memory cache]" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.357709 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.358920 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.359611 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.360069 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.360695 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.382985 4923 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:08 crc 
kubenswrapper[4923]: I0321 04:21:08.383042 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:08 crc kubenswrapper[4923]: E0321 04:21:08.383795 4923 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.384532 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:08 crc kubenswrapper[4923]: W0321 04:21:08.423822 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-712bf58009a1ac8e8469faec54bb71a8d7bdf8b3dfea089fde5e7a0786643415 WatchSource:0}: Error finding container 712bf58009a1ac8e8469faec54bb71a8d7bdf8b3dfea089fde5e7a0786643415: Status 404 returned error can't find the container with id 712bf58009a1ac8e8469faec54bb71a8d7bdf8b3dfea089fde5e7a0786643415 Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.963027 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.966101 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.966184 4923 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="a3c917c628db6baeed175000981dac16ae873338ec2de4eff09c21980a58785e" exitCode=1 Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.966290 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a3c917c628db6baeed175000981dac16ae873338ec2de4eff09c21980a58785e"} Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.967091 4923 scope.go:117] "RemoveContainer" containerID="a3c917c628db6baeed175000981dac16ae873338ec2de4eff09c21980a58785e" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.967479 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.967973 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.968441 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.969026 4923 status_manager.go:851] "Failed to get status for pod" 
podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.969572 4923 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.971009 4923 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7c6dfc81dca0d929f8e32779c9cc51dcf1408b3a5bdbffba10bd1e0e61f56fcc" exitCode=0 Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.971124 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7c6dfc81dca0d929f8e32779c9cc51dcf1408b3a5bdbffba10bd1e0e61f56fcc"} Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.971190 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"712bf58009a1ac8e8469faec54bb71a8d7bdf8b3dfea089fde5e7a0786643415"} Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.971711 4923 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.971749 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.972154 4923 status_manager.go:851] "Failed to get status for pod" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6bf6d879dc-rttfb\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: E0321 04:21:08.972186 4923 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:08 crc kubenswrapper[4923]: E0321 04:21:08.972864 4923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="7s" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.972992 4923 status_manager.go:851] "Failed to get status for pod" podUID="9b9ee07c-4061-4cd2-80e7-c70d2c80577e" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7899f58997-f8wzq\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.973406 4923 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.973840 4923 status_manager.go:851] "Failed to get status for pod" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:08 crc kubenswrapper[4923]: I0321 04:21:08.974236 4923 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Mar 21 04:21:09 crc kubenswrapper[4923]: I0321 04:21:09.981067 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 21 04:21:09 crc kubenswrapper[4923]: I0321 04:21:09.982342 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 21 04:21:09 crc kubenswrapper[4923]: I0321 04:21:09.982491 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8fdb39a51aa340a63165883905aaf9544da5b175caa45a6647bc424fb19f0711"} Mar 21 04:21:09 crc kubenswrapper[4923]: I0321 04:21:09.985613 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a879c7ceca6f00a1ac5b1700f094f6d486e3ae9013e1187115490f3aee04e8cc"} Mar 21 04:21:09 crc kubenswrapper[4923]: I0321 04:21:09.985642 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b33bd3a0aa916e59bee529aa3d79ab12d86b257ad9ff87c04a2fd5413be5f3c5"} Mar 21 04:21:09 crc kubenswrapper[4923]: I0321 04:21:09.985656 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a198eb71beaa78c1b300760d51ddb4b7219f1ada0925834456e232ad9ea1f4ca"} Mar 21 04:21:09 crc kubenswrapper[4923]: I0321 04:21:09.985666 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"916a07896c25cc057a6fadef65a5c07da6fbecf3fde4b58853b612fee91821ef"} Mar 21 04:21:10 crc kubenswrapper[4923]: I0321 04:21:10.996614 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05167cb9339db388e364b641d26c6ea0a0ef75fd9d42ff5d3bbdf63b1fca367c"} Mar 21 04:21:10 crc kubenswrapper[4923]: I0321 04:21:10.997117 4923 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:10 crc kubenswrapper[4923]: I0321 04:21:10.997166 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:10 crc kubenswrapper[4923]: I0321 04:21:10.997546 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:11 crc kubenswrapper[4923]: I0321 04:21:11.758428 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerName="oauth-openshift" containerID="cri-o://62197133d500745148f5c95f6941c62c92b9d16ad3db98763e0f8fdfefde6de5" gracePeriod=15 Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.005957 4923 generic.go:334] "Generic (PLEG): container finished" podID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerID="62197133d500745148f5c95f6941c62c92b9d16ad3db98763e0f8fdfefde6de5" exitCode=0 Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.006008 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" event={"ID":"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d","Type":"ContainerDied","Data":"62197133d500745148f5c95f6941c62c92b9d16ad3db98763e0f8fdfefde6de5"} Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.143797 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.282793 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs2bv\" (UniqueName: \"kubernetes.io/projected/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-kube-api-access-bs2bv\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.282848 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-policies\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.282889 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-ocp-branding-template\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.282922 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-router-certs\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.282969 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-service-ca\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 
crc kubenswrapper[4923]: I0321 04:21:12.282993 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-trusted-ca-bundle\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283039 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-session\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283080 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-idp-0-file-data\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283110 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-error\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283135 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-login\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283173 4923 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-serving-cert\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283224 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-provider-selection\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283256 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-dir\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283289 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-cliconfig\") pod \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\" (UID: \"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d\") " Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.283800 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.284165 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.284157 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.284152 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.284914 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.291557 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.291649 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-kube-api-access-bs2bv" (OuterVolumeSpecName: "kube-api-access-bs2bv") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "kube-api-access-bs2bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.292448 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.292533 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.292930 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.293295 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.293765 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.294166 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.295857 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" (UID: "9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386109 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386167 4923 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386189 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386210 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs2bv\" (UniqueName: \"kubernetes.io/projected/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-kube-api-access-bs2bv\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386229 4923 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc 
kubenswrapper[4923]: I0321 04:21:12.386246 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386268 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386286 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386303 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386348 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386369 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386386 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386404 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:12 crc kubenswrapper[4923]: I0321 04:21:12.386422 4923 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.013439 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" event={"ID":"9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d","Type":"ContainerDied","Data":"d436a9a720d77342e5cf55028ec063d96119d24b46ccd3ce38d736c2e8048a64"} Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.013520 4923 scope.go:117] "RemoveContainer" containerID="62197133d500745148f5c95f6941c62c92b9d16ad3db98763e0f8fdfefde6de5" Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.013520 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5jpr6" Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.385510 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.385939 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.392577 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.603055 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:21:13 crc kubenswrapper[4923]: I0321 04:21:13.607887 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:21:14 crc kubenswrapper[4923]: I0321 04:21:14.025082 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:21:15 crc kubenswrapper[4923]: I0321 04:21:15.285187 4923 patch_prober.go:28] interesting pod/route-controller-manager-6bf6d879dc-rttfb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:21:15 crc kubenswrapper[4923]: I0321 04:21:15.285280 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.64:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 21 04:21:16 crc kubenswrapper[4923]: I0321 04:21:16.007673 4923 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:16 crc kubenswrapper[4923]: I0321 04:21:16.036445 4923 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:16 crc kubenswrapper[4923]: I0321 04:21:16.036471 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:16 crc kubenswrapper[4923]: I0321 04:21:16.044875 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:16 crc kubenswrapper[4923]: I0321 04:21:16.394596 4923 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="32edffab-3b81-4a02-b347-a154871aefaa" Mar 21 04:21:16 crc kubenswrapper[4923]: E0321 04:21:16.403293 4923 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Mar 21 04:21:17 crc kubenswrapper[4923]: I0321 04:21:17.046190 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Mar 21 04:21:17 crc kubenswrapper[4923]: I0321 04:21:17.049564 4923 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="05167cb9339db388e364b641d26c6ea0a0ef75fd9d42ff5d3bbdf63b1fca367c" exitCode=255 Mar 21 04:21:17 crc kubenswrapper[4923]: I0321 
04:21:17.049632 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"05167cb9339db388e364b641d26c6ea0a0ef75fd9d42ff5d3bbdf63b1fca367c"} Mar 21 04:21:17 crc kubenswrapper[4923]: I0321 04:21:17.050123 4923 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:17 crc kubenswrapper[4923]: I0321 04:21:17.050179 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:17 crc kubenswrapper[4923]: I0321 04:21:17.053266 4923 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="32edffab-3b81-4a02-b347-a154871aefaa" Mar 21 04:21:17 crc kubenswrapper[4923]: I0321 04:21:17.054964 4923 scope.go:117] "RemoveContainer" containerID="05167cb9339db388e364b641d26c6ea0a0ef75fd9d42ff5d3bbdf63b1fca367c" Mar 21 04:21:18 crc kubenswrapper[4923]: I0321 04:21:18.059470 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Mar 21 04:21:18 crc kubenswrapper[4923]: I0321 04:21:18.061737 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6d77d7b6384c8096cfc9e176b271f3091e644504ed3d547f47f12086fbe4fe28"} Mar 21 04:21:18 crc kubenswrapper[4923]: I0321 04:21:18.062037 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:18 crc kubenswrapper[4923]: I0321 04:21:18.062265 4923 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:18 crc kubenswrapper[4923]: I0321 04:21:18.062309 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:18 crc kubenswrapper[4923]: I0321 04:21:18.064939 4923 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="32edffab-3b81-4a02-b347-a154871aefaa" Mar 21 04:21:19 crc kubenswrapper[4923]: I0321 04:21:19.086615 4923 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:19 crc kubenswrapper[4923]: I0321 04:21:19.086675 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:19 crc kubenswrapper[4923]: I0321 04:21:19.090717 4923 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="32edffab-3b81-4a02-b347-a154871aefaa" Mar 21 04:21:24 crc kubenswrapper[4923]: I0321 04:21:24.377150 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 21 04:21:25 crc kubenswrapper[4923]: I0321 04:21:25.047663 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 21 04:21:25 crc kubenswrapper[4923]: I0321 04:21:25.284181 4923 patch_prober.go:28] interesting pod/route-controller-manager-6bf6d879dc-rttfb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:21:25 crc kubenswrapper[4923]: I0321 04:21:25.284271 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:21:25 crc kubenswrapper[4923]: I0321 04:21:25.627605 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.142198 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6bf6d879dc-rttfb_3c7ee4e5-1e01-4fba-9ddc-53e671d130bb/route-controller-manager/0.log" Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.142249 4923 generic.go:334] "Generic (PLEG): container finished" podID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerID="d60c28096ba606744e99de672da6c349b48e5c58334e040b97a87e1ef9f03867" exitCode=255 Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.142277 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" event={"ID":"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb","Type":"ContainerDied","Data":"d60c28096ba606744e99de672da6c349b48e5c58334e040b97a87e1ef9f03867"} Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.142737 4923 scope.go:117] "RemoveContainer" containerID="d60c28096ba606744e99de672da6c349b48e5c58334e040b97a87e1ef9f03867" Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.197568 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.221857 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.336077 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 21 04:21:26 crc kubenswrapper[4923]: I0321 04:21:26.460317 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.139429 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.149841 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6bf6d879dc-rttfb_3c7ee4e5-1e01-4fba-9ddc-53e671d130bb/route-controller-manager/0.log" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.149898 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" event={"ID":"3c7ee4e5-1e01-4fba-9ddc-53e671d130bb","Type":"ContainerStarted","Data":"e21b7c1449758d8dd383ecf6c3cde54981d7ba337c9670f3ceda41d450f902bb"} Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.150396 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.325572 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.577826 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 21 04:21:27 crc 
kubenswrapper[4923]: I0321 04:21:27.582267 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.600904 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.666660 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.863563 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 21 04:21:27 crc kubenswrapper[4923]: I0321 04:21:27.909400 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.079797 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.150290 4923 patch_prober.go:28] interesting pod/route-controller-manager-6bf6d879dc-rttfb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.150410 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.150553 
4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.216687 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.248969 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.265359 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.453314 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.472793 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.644675 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.648252 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.703699 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.782869 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.813515 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.835299 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 21 04:21:28 crc kubenswrapper[4923]: I0321 04:21:28.908043 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.089058 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.154176 4923 patch_prober.go:28] interesting pod/route-controller-manager-6bf6d879dc-rttfb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.154278 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podUID="3c7ee4e5-1e01-4fba-9ddc-53e671d130bb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.284682 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.298564 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.332991 4923 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.370589 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.392715 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.398269 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.417911 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.463595 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.468361 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.500463 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.717004 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.725308 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.765533 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 21 04:21:29 crc kubenswrapper[4923]: 
I0321 04:21:29.842675 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 21 04:21:29 crc kubenswrapper[4923]: I0321 04:21:29.944142 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.027031 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.112468 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.200376 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.202769 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.262807 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.351537 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.391433 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.460251 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.496493 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 21 04:21:30 
crc kubenswrapper[4923]: I0321 04:21:30.511310 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.578160 4923 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.675068 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.725976 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.747822 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.787214 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 21 04:21:30 crc kubenswrapper[4923]: I0321 04:21:30.982261 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.251586 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.254967 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.408731 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.430784 4923 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.490248 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.547441 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.899437 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.909205 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.923159 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 21 04:21:31 crc kubenswrapper[4923]: I0321 04:21:31.946801 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.001150 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.004722 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.006039 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.006642 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.011531 4923 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.032700 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.047713 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.051045 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.159972 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.175232 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.263670 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.463862 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.487259 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.518574 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.566906 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.594289 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.660836 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.711231 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.721390 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.801967 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.878417 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 21 04:21:32 crc kubenswrapper[4923]: I0321 04:21:32.953176 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.086399 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.136206 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.167661 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.178210 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.193152 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.236315 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.236469 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.545829 4923 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.641067 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.708904 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.709636 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 21 04:21:33 crc kubenswrapper[4923]: 
I0321 04:21:33.828824 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.920402 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 21 04:21:33 crc kubenswrapper[4923]: I0321 04:21:33.964198 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.003628 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.071989 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.084024 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.192393 4923 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.210111 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.229713 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.237603 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.287769 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.426079 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.519295 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.526724 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.564857 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.601510 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.628337 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.654304 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.752593 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.811860 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.858752 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 
04:21:34.887713 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.952579 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 21 04:21:34 crc kubenswrapper[4923]: I0321 04:21:34.975898 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.043530 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.083579 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.084962 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.189875 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.200422 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.220888 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.243557 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.352560 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.531939 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.581101 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.621215 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.678005 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.792755 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 21 04:21:35 crc kubenswrapper[4923]: I0321 04:21:35.915290 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.044978 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.190808 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.250425 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.283486 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 
04:21:36.335616 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.482920 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.483435 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.520373 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.523238 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.650578 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.749056 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.797780 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.799006 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.846289 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 
04:21:36.906430 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 21 04:21:36 crc kubenswrapper[4923]: I0321 04:21:36.997279 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.173788 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.173826 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.177659 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.212804 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.256875 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.274531 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.307364 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.309879 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.345992 4923 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.351381 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.463171 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.653651 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.676306 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.693126 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.696649 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.778503 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.807536 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.836416 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.846528 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.889477 
4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.899465 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.957522 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 21 04:21:37 crc kubenswrapper[4923]: I0321 04:21:37.996644 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.128951 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.130348 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.148224 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.258966 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.260389 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.283002 4923 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.283480 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7899f58997-f8wzq" 
podStartSLOduration=46.283453755 podStartE2EDuration="46.283453755s" podCreationTimestamp="2026-03-21 04:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:21:15.759523671 +0000 UTC m=+240.912534758" watchObservedRunningTime="2026-03-21 04:21:38.283453755 +0000 UTC m=+263.436464892" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.287763 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.287748148 podStartE2EDuration="43.287748148s" podCreationTimestamp="2026-03-21 04:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:21:15.706352902 +0000 UTC m=+240.859364009" watchObservedRunningTime="2026-03-21 04:21:38.287748148 +0000 UTC m=+263.440759275" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.289289 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bf6d879dc-rttfb" podStartSLOduration=46.289273945 podStartE2EDuration="46.289273945s" podCreationTimestamp="2026-03-21 04:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:21:15.739022849 +0000 UTC m=+240.892033946" watchObservedRunningTime="2026-03-21 04:21:38.289273945 +0000 UTC m=+263.442285072" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.291427 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5jpr6","openshift-kube-apiserver/kube-apiserver-crc"] Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.291505 4923 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5784f4bc78-gxqxp"] Mar 21 04:21:38 crc kubenswrapper[4923]: E0321 04:21:38.291814 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" containerName="installer" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.291837 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" containerName="installer" Mar 21 04:21:38 crc kubenswrapper[4923]: E0321 04:21:38.291858 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerName="oauth-openshift" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.291874 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerName="oauth-openshift" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.291947 4923 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.291981 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f63007f3-ef94-44f5-a6c4-bc456777eb89" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.292049 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" containerName="oauth-openshift" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.292079 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d694cb6-70ab-4c6b-98ee-9aa819980356" containerName="installer" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.292780 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.295984 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.296047 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.296513 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.297414 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.297913 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.298316 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.298445 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.298844 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.299090 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.299557 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 21 04:21:38 crc 
kubenswrapper[4923]: I0321 04:21:38.299816 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.303890 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.304541 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.315846 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.318851 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.326869 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.329232 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.329210115 podStartE2EDuration="22.329210115s" podCreationTimestamp="2026-03-21 04:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:21:38.329113843 +0000 UTC m=+263.482124990" watchObservedRunningTime="2026-03-21 04:21:38.329210115 +0000 UTC m=+263.482221223" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.343170 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.373567 4923 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d" path="/var/lib/kubelet/pods/9cdbe23d-5e4e-4eff-bb14-3bd9094ae97d/volumes" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.410822 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429119 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429209 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429261 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-service-ca\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429301 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429362 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-audit-policies\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429405 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429574 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea74896e-c567-4b73-be7a-45e166e7ab35-audit-dir\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429642 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-error\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " 
pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429677 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-login\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429706 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-session\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429749 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-router-certs\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429771 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbnf\" (UniqueName: \"kubernetes.io/projected/ea74896e-c567-4b73-be7a-45e166e7ab35-kube-api-access-5qbnf\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.429981 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.430071 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.462364 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530747 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-error\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530799 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-login\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530834 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-session\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530862 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-router-certs\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530883 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbnf\" (UniqueName: \"kubernetes.io/projected/ea74896e-c567-4b73-be7a-45e166e7ab35-kube-api-access-5qbnf\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530921 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530952 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: 
\"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.530987 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.531016 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.531047 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-service-ca\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.531071 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.531091 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-audit-policies\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.531117 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.531161 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea74896e-c567-4b73-be7a-45e166e7ab35-audit-dir\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.531238 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea74896e-c567-4b73-be7a-45e166e7ab35-audit-dir\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.532464 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " 
pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.532779 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-audit-policies\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.533091 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-service-ca\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.533429 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.536887 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.536999 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-session\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.538333 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.545921 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-error\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.553366 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-router-certs\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.553697 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 
04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.553958 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-user-template-login\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.560135 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbnf\" (UniqueName: \"kubernetes.io/projected/ea74896e-c567-4b73-be7a-45e166e7ab35-kube-api-access-5qbnf\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.561516 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea74896e-c567-4b73-be7a-45e166e7ab35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5784f4bc78-gxqxp\" (UID: \"ea74896e-c567-4b73-be7a-45e166e7ab35\") " pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.613764 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.634843 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.795689 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.798130 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.866087 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.871479 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5784f4bc78-gxqxp"] Mar 21 04:21:38 crc kubenswrapper[4923]: I0321 04:21:38.955772 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.042757 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.053598 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.067590 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.145025 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 
04:21:39.163762 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.220598 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" event={"ID":"ea74896e-c567-4b73-be7a-45e166e7ab35","Type":"ContainerStarted","Data":"cdf7776e6de7113f8ceed1f362f985685a7e7027c9be0bb5b02e2c9bb9e6409e"} Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.349977 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.446867 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.457618 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.605591 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.616120 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.625036 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.649810 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.670638 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.705156 
4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.749569 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.752202 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.772831 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 21 04:21:39 crc kubenswrapper[4923]: I0321 04:21:39.809550 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.004396 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.140450 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.229009 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" event={"ID":"ea74896e-c567-4b73-be7a-45e166e7ab35","Type":"ContainerStarted","Data":"c528351634bb04d7aedfbfc90285ef23e3ab9ce4fe04153a2a4f6904b8bd54ed"} Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.229353 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.235965 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" Mar 21 
04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.252978 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5784f4bc78-gxqxp" podStartSLOduration=54.252953968 podStartE2EDuration="54.252953968s" podCreationTimestamp="2026-03-21 04:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:21:40.251559645 +0000 UTC m=+265.404570732" watchObservedRunningTime="2026-03-21 04:21:40.252953968 +0000 UTC m=+265.405965065" Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.428441 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.623070 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.648998 4923 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 21 04:21:40 crc kubenswrapper[4923]: I0321 04:21:40.792264 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.071805 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.106393 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.430656 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.464248 4923 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.536923 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.787898 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.913521 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.914149 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.919855 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 21 04:21:41 crc kubenswrapper[4923]: I0321 04:21:41.996821 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 04:21:42.042881 4923 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 04:21:42.043612 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 04:21:42.461031 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 04:21:42.565727 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 
04:21:42.581502 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 04:21:42.645071 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 04:21:42.906903 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 21 04:21:42 crc kubenswrapper[4923]: I0321 04:21:42.941273 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 21 04:21:43 crc kubenswrapper[4923]: I0321 04:21:43.291618 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 21 04:21:43 crc kubenswrapper[4923]: I0321 04:21:43.312983 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 21 04:21:44 crc kubenswrapper[4923]: I0321 04:21:44.159106 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 21 04:21:45 crc kubenswrapper[4923]: I0321 04:21:45.646150 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 21 04:21:49 crc kubenswrapper[4923]: I0321 04:21:49.739752 4923 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 21 04:21:49 crc kubenswrapper[4923]: I0321 04:21:49.740239 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0bb5a00c56f27497688f44231653e776caa9673c3d8a657c36dc3a5f06867724" 
gracePeriod=5 Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.330909 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.331387 4923 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0bb5a00c56f27497688f44231653e776caa9673c3d8a657c36dc3a5f06867724" exitCode=137 Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.331427 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f67b42a171a04ac9c0efe28cba1a68d8e076ece98fa6d0b86f3e6d75939260" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.356182 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.356358 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.488246 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.488403 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.488470 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.488510 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.488544 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.488980 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: 
"resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.489190 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.489189 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.489249 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.498696 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.590508 4923 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.590544 4923 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.590556 4923 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.590564 4923 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:55 crc kubenswrapper[4923]: I0321 04:21:55.590574 4923 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 21 04:21:56 crc kubenswrapper[4923]: I0321 04:21:56.338618 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 21 04:21:56 crc kubenswrapper[4923]: I0321 04:21:56.365233 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 21 04:21:56 crc kubenswrapper[4923]: I0321 04:21:56.365547 4923 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 21 04:21:56 crc kubenswrapper[4923]: I0321 04:21:56.378209 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 21 04:21:56 crc kubenswrapper[4923]: I0321 04:21:56.378246 4923 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6545f284-fa34-4496-a369-e9fa3c60104d" Mar 21 04:21:56 crc kubenswrapper[4923]: I0321 04:21:56.381129 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 21 04:21:56 crc kubenswrapper[4923]: I0321 04:21:56.381158 4923 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6545f284-fa34-4496-a369-e9fa3c60104d" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.209849 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567782-b2xpp"] Mar 21 04:22:00 crc kubenswrapper[4923]: E0321 04:22:00.210160 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.210180 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 21 04:22:00 crc 
kubenswrapper[4923]: I0321 04:22:00.210426 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.211200 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.213412 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.213694 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.214136 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.222489 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567782-b2xpp"] Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.356887 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zngwk\" (UniqueName: \"kubernetes.io/projected/717a82ca-ac10-41e8-9dae-ba2ce0aec329-kube-api-access-zngwk\") pod \"auto-csr-approver-29567782-b2xpp\" (UID: \"717a82ca-ac10-41e8-9dae-ba2ce0aec329\") " pod="openshift-infra/auto-csr-approver-29567782-b2xpp" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.459014 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zngwk\" (UniqueName: \"kubernetes.io/projected/717a82ca-ac10-41e8-9dae-ba2ce0aec329-kube-api-access-zngwk\") pod \"auto-csr-approver-29567782-b2xpp\" (UID: \"717a82ca-ac10-41e8-9dae-ba2ce0aec329\") " pod="openshift-infra/auto-csr-approver-29567782-b2xpp" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.482435 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zngwk\" (UniqueName: \"kubernetes.io/projected/717a82ca-ac10-41e8-9dae-ba2ce0aec329-kube-api-access-zngwk\") pod \"auto-csr-approver-29567782-b2xpp\" (UID: \"717a82ca-ac10-41e8-9dae-ba2ce0aec329\") " pod="openshift-infra/auto-csr-approver-29567782-b2xpp" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.534686 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" Mar 21 04:22:00 crc kubenswrapper[4923]: I0321 04:22:00.984122 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567782-b2xpp"] Mar 21 04:22:01 crc kubenswrapper[4923]: I0321 04:22:01.374295 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" event={"ID":"717a82ca-ac10-41e8-9dae-ba2ce0aec329","Type":"ContainerStarted","Data":"9b25acac16c327c92332b87c486a453fc21fb59d8b087e335fad14e7bd7f807b"} Mar 21 04:22:02 crc kubenswrapper[4923]: I0321 04:22:02.384237 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" event={"ID":"717a82ca-ac10-41e8-9dae-ba2ce0aec329","Type":"ContainerStarted","Data":"01cda3afc2c88a1f73cc8dbfd9ec31a44d2c09a620bf520936c20ba613690756"} Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.086133 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" podStartSLOduration=2.109700098 podStartE2EDuration="3.086118443s" podCreationTimestamp="2026-03-21 04:22:00 +0000 UTC" firstStartedPulling="2026-03-21 04:22:00.994944639 +0000 UTC m=+286.147955776" lastFinishedPulling="2026-03-21 04:22:01.971363014 +0000 UTC m=+287.124374121" observedRunningTime="2026-03-21 04:22:02.403209665 +0000 UTC m=+287.556220822" watchObservedRunningTime="2026-03-21 04:22:03.086118443 +0000 UTC m=+288.239129530" Mar 21 
04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.088777 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m57bq"] Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.088996 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m57bq" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="registry-server" containerID="cri-o://4da8c14e907bb9e82ae352f621c5ae8bee61b898cb1217698674f4ee753d987d" gracePeriod=2 Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.236195 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.236267 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.236335 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.237530 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76f6e7c342bff59611eaa01ba2ed57f1a12fe75dab0e15a48267ea824f439b6f"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.237597 4923 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://76f6e7c342bff59611eaa01ba2ed57f1a12fe75dab0e15a48267ea824f439b6f" gracePeriod=600 Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.397238 4923 generic.go:334] "Generic (PLEG): container finished" podID="717a82ca-ac10-41e8-9dae-ba2ce0aec329" containerID="01cda3afc2c88a1f73cc8dbfd9ec31a44d2c09a620bf520936c20ba613690756" exitCode=0 Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.397304 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" event={"ID":"717a82ca-ac10-41e8-9dae-ba2ce0aec329","Type":"ContainerDied","Data":"01cda3afc2c88a1f73cc8dbfd9ec31a44d2c09a620bf520936c20ba613690756"} Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.400966 4923 generic.go:334] "Generic (PLEG): container finished" podID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerID="4da8c14e907bb9e82ae352f621c5ae8bee61b898cb1217698674f4ee753d987d" exitCode=0 Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.401009 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m57bq" event={"ID":"e8d52ff5-e444-4d7d-952f-6d95888a7791","Type":"ContainerDied","Data":"4da8c14e907bb9e82ae352f621c5ae8bee61b898cb1217698674f4ee753d987d"} Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.415638 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="76f6e7c342bff59611eaa01ba2ed57f1a12fe75dab0e15a48267ea824f439b6f" exitCode=0 Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.415719 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" 
event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"76f6e7c342bff59611eaa01ba2ed57f1a12fe75dab0e15a48267ea824f439b6f"} Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.501914 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.597667 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-utilities\") pod \"e8d52ff5-e444-4d7d-952f-6d95888a7791\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.597770 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lxb5\" (UniqueName: \"kubernetes.io/projected/e8d52ff5-e444-4d7d-952f-6d95888a7791-kube-api-access-4lxb5\") pod \"e8d52ff5-e444-4d7d-952f-6d95888a7791\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.597830 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-catalog-content\") pod \"e8d52ff5-e444-4d7d-952f-6d95888a7791\" (UID: \"e8d52ff5-e444-4d7d-952f-6d95888a7791\") " Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.598691 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-utilities" (OuterVolumeSpecName: "utilities") pod "e8d52ff5-e444-4d7d-952f-6d95888a7791" (UID: "e8d52ff5-e444-4d7d-952f-6d95888a7791"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.603449 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d52ff5-e444-4d7d-952f-6d95888a7791-kube-api-access-4lxb5" (OuterVolumeSpecName: "kube-api-access-4lxb5") pod "e8d52ff5-e444-4d7d-952f-6d95888a7791" (UID: "e8d52ff5-e444-4d7d-952f-6d95888a7791"). InnerVolumeSpecName "kube-api-access-4lxb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.673082 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d52ff5-e444-4d7d-952f-6d95888a7791" (UID: "e8d52ff5-e444-4d7d-952f-6d95888a7791"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.699776 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.699833 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lxb5\" (UniqueName: \"kubernetes.io/projected/e8d52ff5-e444-4d7d-952f-6d95888a7791-kube-api-access-4lxb5\") on node \"crc\" DevicePath \"\"" Mar 21 04:22:03 crc kubenswrapper[4923]: I0321 04:22:03.699852 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d52ff5-e444-4d7d-952f-6d95888a7791-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.424908 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m57bq" Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.424919 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m57bq" event={"ID":"e8d52ff5-e444-4d7d-952f-6d95888a7791","Type":"ContainerDied","Data":"61688def02468bb744888e24c27d3c6c835fa8d21087524a0debc82fdcaf5407"} Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.425713 4923 scope.go:117] "RemoveContainer" containerID="4da8c14e907bb9e82ae352f621c5ae8bee61b898cb1217698674f4ee753d987d" Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.429868 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"ff44cc77171b74283f670b22d040687ae1a94cc7da74c4f35593b3a93ed4a47d"} Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.446784 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m57bq"] Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.447377 4923 scope.go:117] "RemoveContainer" containerID="1510692a0afe19b1c0959f4ba675af3e189d5c3ad770df05a20ffb018cbd7a3f" Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.450535 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m57bq"] Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.471467 4923 scope.go:117] "RemoveContainer" containerID="4b1a63022d3397bf3ed5e71f93af80fc215147df45e10a535460038e255f543b" Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.679034 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.816995 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zngwk\" (UniqueName: \"kubernetes.io/projected/717a82ca-ac10-41e8-9dae-ba2ce0aec329-kube-api-access-zngwk\") pod \"717a82ca-ac10-41e8-9dae-ba2ce0aec329\" (UID: \"717a82ca-ac10-41e8-9dae-ba2ce0aec329\") " Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.821975 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/717a82ca-ac10-41e8-9dae-ba2ce0aec329-kube-api-access-zngwk" (OuterVolumeSpecName: "kube-api-access-zngwk") pod "717a82ca-ac10-41e8-9dae-ba2ce0aec329" (UID: "717a82ca-ac10-41e8-9dae-ba2ce0aec329"). InnerVolumeSpecName "kube-api-access-zngwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:22:04 crc kubenswrapper[4923]: I0321 04:22:04.918740 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zngwk\" (UniqueName: \"kubernetes.io/projected/717a82ca-ac10-41e8-9dae-ba2ce0aec329-kube-api-access-zngwk\") on node \"crc\" DevicePath \"\"" Mar 21 04:22:05 crc kubenswrapper[4923]: I0321 04:22:05.439653 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" event={"ID":"717a82ca-ac10-41e8-9dae-ba2ce0aec329","Type":"ContainerDied","Data":"9b25acac16c327c92332b87c486a453fc21fb59d8b087e335fad14e7bd7f807b"} Mar 21 04:22:05 crc kubenswrapper[4923]: I0321 04:22:05.439758 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b25acac16c327c92332b87c486a453fc21fb59d8b087e335fad14e7bd7f807b" Mar 21 04:22:05 crc kubenswrapper[4923]: I0321 04:22:05.439678 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567782-b2xpp" Mar 21 04:22:06 crc kubenswrapper[4923]: I0321 04:22:06.368951 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" path="/var/lib/kubelet/pods/e8d52ff5-e444-4d7d-952f-6d95888a7791/volumes" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.779820 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-75dj8"] Mar 21 04:22:47 crc kubenswrapper[4923]: E0321 04:22:47.780752 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="extract-content" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.780769 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="extract-content" Mar 21 04:22:47 crc kubenswrapper[4923]: E0321 04:22:47.780782 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="extract-utilities" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.780790 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="extract-utilities" Mar 21 04:22:47 crc kubenswrapper[4923]: E0321 04:22:47.780803 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="registry-server" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.780810 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="registry-server" Mar 21 04:22:47 crc kubenswrapper[4923]: E0321 04:22:47.780821 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717a82ca-ac10-41e8-9dae-ba2ce0aec329" containerName="oc" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.780828 4923 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="717a82ca-ac10-41e8-9dae-ba2ce0aec329" containerName="oc" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.780945 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="717a82ca-ac10-41e8-9dae-ba2ce0aec329" containerName="oc" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.780965 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d52ff5-e444-4d7d-952f-6d95888a7791" containerName="registry-server" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.781445 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.792997 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-75dj8"] Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.892555 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90dc830-da66-4fc2-88e3-7e21d67a459c-registry-certificates\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.892668 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbdgc\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-kube-api-access-tbdgc\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.892715 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-registry-tls\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.892758 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.892797 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90dc830-da66-4fc2-88e3-7e21d67a459c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.892949 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-bound-sa-token\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.892996 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90dc830-da66-4fc2-88e3-7e21d67a459c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" 
Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.893019 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90dc830-da66-4fc2-88e3-7e21d67a459c-trusted-ca\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.920798 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994070 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90dc830-da66-4fc2-88e3-7e21d67a459c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994124 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90dc830-da66-4fc2-88e3-7e21d67a459c-trusted-ca\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994191 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90dc830-da66-4fc2-88e3-7e21d67a459c-registry-certificates\") pod 
\"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994242 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbdgc\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-kube-api-access-tbdgc\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994271 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-registry-tls\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994295 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f90dc830-da66-4fc2-88e3-7e21d67a459c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994337 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-bound-sa-token\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.994974 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/f90dc830-da66-4fc2-88e3-7e21d67a459c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.995631 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f90dc830-da66-4fc2-88e3-7e21d67a459c-registry-certificates\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:47 crc kubenswrapper[4923]: I0321 04:22:47.996438 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f90dc830-da66-4fc2-88e3-7e21d67a459c-trusted-ca\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.000885 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f90dc830-da66-4fc2-88e3-7e21d67a459c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.002104 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-registry-tls\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.018517 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-bound-sa-token\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.021709 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbdgc\" (UniqueName: \"kubernetes.io/projected/f90dc830-da66-4fc2-88e3-7e21d67a459c-kube-api-access-tbdgc\") pod \"image-registry-66df7c8f76-75dj8\" (UID: \"f90dc830-da66-4fc2-88e3-7e21d67a459c\") " pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.111624 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.344091 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-75dj8"] Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.717811 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" event={"ID":"f90dc830-da66-4fc2-88e3-7e21d67a459c","Type":"ContainerStarted","Data":"c861734d848fa2f0f5d93e1fa6dc80a959c9fd27027e8458557773cfbbe0d459"} Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.717865 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" event={"ID":"f90dc830-da66-4fc2-88e3-7e21d67a459c","Type":"ContainerStarted","Data":"664d52358a804e719fbeafae12789d12b9bb29a829d4fce0c85f4a7004e70ff6"} Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.718015 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:22:48 crc kubenswrapper[4923]: I0321 04:22:48.755782 
4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" podStartSLOduration=1.755757631 podStartE2EDuration="1.755757631s" podCreationTimestamp="2026-03-21 04:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:22:48.75312895 +0000 UTC m=+333.906140077" watchObservedRunningTime="2026-03-21 04:22:48.755757631 +0000 UTC m=+333.908768758" Mar 21 04:23:08 crc kubenswrapper[4923]: I0321 04:23:08.118827 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-75dj8" Mar 21 04:23:08 crc kubenswrapper[4923]: I0321 04:23:08.232342 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bmnqq"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.286402 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hq8bw"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.289250 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hq8bw" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="registry-server" containerID="cri-o://4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607" gracePeriod=30 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.295176 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9grdl"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.295485 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9grdl" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="registry-server" containerID="cri-o://9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c" gracePeriod=30 Mar 
21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.307914 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bxdt"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.308956 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" podUID="39dc2e68-1df7-426f-aa50-15a542f6995b" containerName="marketplace-operator" containerID="cri-o://eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c" gracePeriod=30 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.322487 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5twmg"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.322779 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5twmg" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="registry-server" containerID="cri-o://832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd" gracePeriod=30 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.330077 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lcp5h"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.330379 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lcp5h" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="registry-server" containerID="cri-o://7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd" gracePeriod=30 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.346387 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nj2n7"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.347427 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.356213 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nj2n7"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.422980 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.423090 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.423117 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfcmf\" (UniqueName: \"kubernetes.io/projected/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-kube-api-access-sfcmf\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.523810 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: 
\"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.524723 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.524785 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfcmf\" (UniqueName: \"kubernetes.io/projected/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-kube-api-access-sfcmf\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.525864 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.529306 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.539025 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfcmf\" 
(UniqueName: \"kubernetes.io/projected/bba19ab1-fbf2-4a6f-a481-45e06896f9cd-kube-api-access-sfcmf\") pod \"marketplace-operator-79b997595-nj2n7\" (UID: \"bba19ab1-fbf2-4a6f-a481-45e06896f9cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.766625 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.785209 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.792721 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.820654 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.825774 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.828695 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-catalog-content\") pod \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.828741 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-catalog-content\") pod \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.828794 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-utilities\") pod \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.828824 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-utilities\") pod \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.828848 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k86bc\" (UniqueName: \"kubernetes.io/projected/3b1d4e5a-6b46-4203-ba69-6440844e48ad-kube-api-access-k86bc\") pod \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\" (UID: \"3b1d4e5a-6b46-4203-ba69-6440844e48ad\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.828867 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8zbhj\" (UniqueName: \"kubernetes.io/projected/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-kube-api-access-8zbhj\") pod \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\" (UID: \"3dd7868c-7d3e-43de-b0f9-1ab51280fce5\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.841214 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-utilities" (OuterVolumeSpecName: "utilities") pod "3b1d4e5a-6b46-4203-ba69-6440844e48ad" (UID: "3b1d4e5a-6b46-4203-ba69-6440844e48ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.842026 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-utilities" (OuterVolumeSpecName: "utilities") pod "3dd7868c-7d3e-43de-b0f9-1ab51280fce5" (UID: "3dd7868c-7d3e-43de-b0f9-1ab51280fce5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.848845 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.864504 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-kube-api-access-8zbhj" (OuterVolumeSpecName: "kube-api-access-8zbhj") pod "3dd7868c-7d3e-43de-b0f9-1ab51280fce5" (UID: "3dd7868c-7d3e-43de-b0f9-1ab51280fce5"). InnerVolumeSpecName "kube-api-access-8zbhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.867665 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1d4e5a-6b46-4203-ba69-6440844e48ad-kube-api-access-k86bc" (OuterVolumeSpecName: "kube-api-access-k86bc") pod "3b1d4e5a-6b46-4203-ba69-6440844e48ad" (UID: "3b1d4e5a-6b46-4203-ba69-6440844e48ad"). InnerVolumeSpecName "kube-api-access-k86bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.897467 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b1d4e5a-6b46-4203-ba69-6440844e48ad" (UID: "3b1d4e5a-6b46-4203-ba69-6440844e48ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.911307 4923 generic.go:334] "Generic (PLEG): container finished" podID="dd250302-91b0-41c4-b138-89559a78d375" containerID="9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c" exitCode=0 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.911407 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grdl" event={"ID":"dd250302-91b0-41c4-b138-89559a78d375","Type":"ContainerDied","Data":"9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.911438 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grdl" event={"ID":"dd250302-91b0-41c4-b138-89559a78d375","Type":"ContainerDied","Data":"b07fecc856dcd3606b8e2e69a289615b35387edc534f536fce52145ed6546fb9"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.911460 4923 scope.go:117] "RemoveContainer" 
containerID="9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.911592 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9grdl" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.922126 4923 generic.go:334] "Generic (PLEG): container finished" podID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerID="832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd" exitCode=0 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.922190 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5twmg" event={"ID":"3b1d4e5a-6b46-4203-ba69-6440844e48ad","Type":"ContainerDied","Data":"832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.922216 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5twmg" event={"ID":"3b1d4e5a-6b46-4203-ba69-6440844e48ad","Type":"ContainerDied","Data":"9e64ad2a3528079f52d762c3aed972254347ceaaf0f9c36c4391a196d37f54a5"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.922283 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5twmg" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.930240 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dd7868c-7d3e-43de-b0f9-1ab51280fce5" (UID: "3dd7868c-7d3e-43de-b0f9-1ab51280fce5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.930668 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgtj6\" (UniqueName: \"kubernetes.io/projected/39dc2e68-1df7-426f-aa50-15a542f6995b-kube-api-access-cgtj6\") pod \"39dc2e68-1df7-426f-aa50-15a542f6995b\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.930719 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvsd4\" (UniqueName: \"kubernetes.io/projected/ded4d513-cc92-405c-8009-913f9aa7ea5f-kube-api-access-zvsd4\") pod \"ded4d513-cc92-405c-8009-913f9aa7ea5f\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.930786 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-catalog-content\") pod \"ded4d513-cc92-405c-8009-913f9aa7ea5f\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.930815 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-trusted-ca\") pod \"39dc2e68-1df7-426f-aa50-15a542f6995b\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.930841 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-operator-metrics\") pod \"39dc2e68-1df7-426f-aa50-15a542f6995b\" (UID: \"39dc2e68-1df7-426f-aa50-15a542f6995b\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.930883 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-utilities\") pod \"ded4d513-cc92-405c-8009-913f9aa7ea5f\" (UID: \"ded4d513-cc92-405c-8009-913f9aa7ea5f\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.931609 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-catalog-content\") pod \"dd250302-91b0-41c4-b138-89559a78d375\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.931659 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z6bp\" (UniqueName: \"kubernetes.io/projected/dd250302-91b0-41c4-b138-89559a78d375-kube-api-access-4z6bp\") pod \"dd250302-91b0-41c4-b138-89559a78d375\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.931778 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-utilities\") pod \"dd250302-91b0-41c4-b138-89559a78d375\" (UID: \"dd250302-91b0-41c4-b138-89559a78d375\") " Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.932131 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.932152 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k86bc\" (UniqueName: \"kubernetes.io/projected/3b1d4e5a-6b46-4203-ba69-6440844e48ad-kube-api-access-k86bc\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.932168 4923 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8zbhj\" (UniqueName: \"kubernetes.io/projected/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-kube-api-access-8zbhj\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.932182 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dd7868c-7d3e-43de-b0f9-1ab51280fce5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.932193 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.932204 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b1d4e5a-6b46-4203-ba69-6440844e48ad-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.933066 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-utilities" (OuterVolumeSpecName: "utilities") pod "dd250302-91b0-41c4-b138-89559a78d375" (UID: "dd250302-91b0-41c4-b138-89559a78d375"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.933163 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "39dc2e68-1df7-426f-aa50-15a542f6995b" (UID: "39dc2e68-1df7-426f-aa50-15a542f6995b"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.933385 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-utilities" (OuterVolumeSpecName: "utilities") pod "ded4d513-cc92-405c-8009-913f9aa7ea5f" (UID: "ded4d513-cc92-405c-8009-913f9aa7ea5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.936120 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39dc2e68-1df7-426f-aa50-15a542f6995b-kube-api-access-cgtj6" (OuterVolumeSpecName: "kube-api-access-cgtj6") pod "39dc2e68-1df7-426f-aa50-15a542f6995b" (UID: "39dc2e68-1df7-426f-aa50-15a542f6995b"). InnerVolumeSpecName "kube-api-access-cgtj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.942249 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd250302-91b0-41c4-b138-89559a78d375-kube-api-access-4z6bp" (OuterVolumeSpecName: "kube-api-access-4z6bp") pod "dd250302-91b0-41c4-b138-89559a78d375" (UID: "dd250302-91b0-41c4-b138-89559a78d375"). InnerVolumeSpecName "kube-api-access-4z6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.942386 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded4d513-cc92-405c-8009-913f9aa7ea5f-kube-api-access-zvsd4" (OuterVolumeSpecName: "kube-api-access-zvsd4") pod "ded4d513-cc92-405c-8009-913f9aa7ea5f" (UID: "ded4d513-cc92-405c-8009-913f9aa7ea5f"). InnerVolumeSpecName "kube-api-access-zvsd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.942459 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "39dc2e68-1df7-426f-aa50-15a542f6995b" (UID: "39dc2e68-1df7-426f-aa50-15a542f6995b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.943591 4923 generic.go:334] "Generic (PLEG): container finished" podID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerID="7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd" exitCode=0 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.943668 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcp5h" event={"ID":"ded4d513-cc92-405c-8009-913f9aa7ea5f","Type":"ContainerDied","Data":"7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.943698 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lcp5h" event={"ID":"ded4d513-cc92-405c-8009-913f9aa7ea5f","Type":"ContainerDied","Data":"db39b272dc807a9978c9652e86064a47413271258a3c3da147e31dc565414414"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.943772 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lcp5h" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.948730 4923 generic.go:334] "Generic (PLEG): container finished" podID="39dc2e68-1df7-426f-aa50-15a542f6995b" containerID="eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c" exitCode=0 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.948826 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" event={"ID":"39dc2e68-1df7-426f-aa50-15a542f6995b","Type":"ContainerDied","Data":"eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.948882 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" event={"ID":"39dc2e68-1df7-426f-aa50-15a542f6995b","Type":"ContainerDied","Data":"a51aafffe547fe0a825780c83bc23eaa3baf3f01a18005ef2c6918b7e3786890"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.949000 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8bxdt" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.952441 4923 generic.go:334] "Generic (PLEG): container finished" podID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerID="4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607" exitCode=0 Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.952477 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq8bw" event={"ID":"3dd7868c-7d3e-43de-b0f9-1ab51280fce5","Type":"ContainerDied","Data":"4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.952519 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hq8bw" event={"ID":"3dd7868c-7d3e-43de-b0f9-1ab51280fce5","Type":"ContainerDied","Data":"a28fb4fd097cbd182866a26619d80ac9101811f1f0fa01c9335fe758b4fee0ec"} Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.952595 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hq8bw" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.972268 4923 scope.go:117] "RemoveContainer" containerID="88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4" Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.982449 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5twmg"] Mar 21 04:23:11 crc kubenswrapper[4923]: I0321 04:23:11.991920 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5twmg"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.003603 4923 scope.go:117] "RemoveContainer" containerID="afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.016312 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd250302-91b0-41c4-b138-89559a78d375" (UID: "dd250302-91b0-41c4-b138-89559a78d375"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.022116 4923 scope.go:117] "RemoveContainer" containerID="9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.022780 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c\": container with ID starting with 9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c not found: ID does not exist" containerID="9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.022825 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c"} err="failed to get container status \"9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c\": rpc error: code = NotFound desc = could not find container \"9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c\": container with ID starting with 9ed588912e23d419b70fe410dea7a0b562a3006d54a3392b4b08b607411ee39c not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.022843 4923 scope.go:117] "RemoveContainer" containerID="88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.024567 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4\": container with ID starting with 88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4 not found: ID does not exist" containerID="88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.024584 
4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4"} err="failed to get container status \"88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4\": rpc error: code = NotFound desc = could not find container \"88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4\": container with ID starting with 88354b6394a1c348f8e9b82d3a6a1b70c22ee2289e53e9875d06e7bcd66625f4 not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.024596 4923 scope.go:117] "RemoveContainer" containerID="afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.026276 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b\": container with ID starting with afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b not found: ID does not exist" containerID="afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.026301 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b"} err="failed to get container status \"afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b\": rpc error: code = NotFound desc = could not find container \"afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b\": container with ID starting with afc7969cee3de90ae02cb5b465e8ea7e99e722f5f113c777e651812751ac833b not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.026413 4923 scope.go:117] "RemoveContainer" containerID="832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 
04:23:12.027985 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hq8bw"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033870 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033892 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z6bp\" (UniqueName: \"kubernetes.io/projected/dd250302-91b0-41c4-b138-89559a78d375-kube-api-access-4z6bp\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033905 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd250302-91b0-41c4-b138-89559a78d375-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033913 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgtj6\" (UniqueName: \"kubernetes.io/projected/39dc2e68-1df7-426f-aa50-15a542f6995b-kube-api-access-cgtj6\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033921 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvsd4\" (UniqueName: \"kubernetes.io/projected/ded4d513-cc92-405c-8009-913f9aa7ea5f-kube-api-access-zvsd4\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033930 4923 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033938 4923 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/39dc2e68-1df7-426f-aa50-15a542f6995b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.033946 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.036467 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hq8bw"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.047427 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bxdt"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.049567 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8bxdt"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.054528 4923 scope.go:117] "RemoveContainer" containerID="ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.071460 4923 scope.go:117] "RemoveContainer" containerID="82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.085908 4923 scope.go:117] "RemoveContainer" containerID="832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.086301 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd\": container with ID starting with 832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd not found: ID does not exist" containerID="832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.086339 4923 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd"} err="failed to get container status \"832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd\": rpc error: code = NotFound desc = could not find container \"832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd\": container with ID starting with 832ae6b4a689235e2e1a8930d72538fd8d2e1bcf385eca0fbfcb2159a5a78acd not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.086361 4923 scope.go:117] "RemoveContainer" containerID="ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.086710 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88\": container with ID starting with ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88 not found: ID does not exist" containerID="ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.086765 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88"} err="failed to get container status \"ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88\": rpc error: code = NotFound desc = could not find container \"ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88\": container with ID starting with ae4b4e4f20f8bbbc294061eb7b4ce7e0e25fcbf71dc46de1f56194f88e69cf88 not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.086795 4923 scope.go:117] "RemoveContainer" containerID="82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 
04:23:12.087344 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2\": container with ID starting with 82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2 not found: ID does not exist" containerID="82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.087391 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2"} err="failed to get container status \"82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2\": rpc error: code = NotFound desc = could not find container \"82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2\": container with ID starting with 82a16abf8cc9ff812ad09d93671ed8ffd3d126a2db4bdf55ef0464ef39b507e2 not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.087428 4923 scope.go:117] "RemoveContainer" containerID="7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.100497 4923 scope.go:117] "RemoveContainer" containerID="f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.120972 4923 scope.go:117] "RemoveContainer" containerID="753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.145706 4923 scope.go:117] "RemoveContainer" containerID="7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.146868 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd\": container 
with ID starting with 7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd not found: ID does not exist" containerID="7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.146894 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd"} err="failed to get container status \"7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd\": rpc error: code = NotFound desc = could not find container \"7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd\": container with ID starting with 7abf8fd90cfbc646ecb4876e1c2c40036d7c3c5b7d2c26d9692662a4875b8bcd not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.146935 4923 scope.go:117] "RemoveContainer" containerID="f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.147172 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3\": container with ID starting with f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3 not found: ID does not exist" containerID="f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.147195 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3"} err="failed to get container status \"f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3\": rpc error: code = NotFound desc = could not find container \"f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3\": container with ID starting with f75b56df386d23181b95dd676fb15bfe29d3f5074710ffeba886812ab19a87f3 not 
found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.147209 4923 scope.go:117] "RemoveContainer" containerID="753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.147640 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd\": container with ID starting with 753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd not found: ID does not exist" containerID="753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.147682 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd"} err="failed to get container status \"753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd\": rpc error: code = NotFound desc = could not find container \"753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd\": container with ID starting with 753417c5ee736e82cd9e5bd479968da5a5adeb965edb948c56b4a536080fe8cd not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.147726 4923 scope.go:117] "RemoveContainer" containerID="eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.157586 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ded4d513-cc92-405c-8009-913f9aa7ea5f" (UID: "ded4d513-cc92-405c-8009-913f9aa7ea5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.180185 4923 scope.go:117] "RemoveContainer" containerID="eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.180614 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c\": container with ID starting with eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c not found: ID does not exist" containerID="eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.180650 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c"} err="failed to get container status \"eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c\": rpc error: code = NotFound desc = could not find container \"eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c\": container with ID starting with eb28bff817234d4421e9a8e6a6071fdcf9037af713796c7c75118de6cfe81d4c not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.180678 4923 scope.go:117] "RemoveContainer" containerID="4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.192267 4923 scope.go:117] "RemoveContainer" containerID="2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.203612 4923 scope.go:117] "RemoveContainer" containerID="6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.216671 4923 scope.go:117] "RemoveContainer" 
containerID="4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.221816 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607\": container with ID starting with 4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607 not found: ID does not exist" containerID="4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.221853 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607"} err="failed to get container status \"4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607\": rpc error: code = NotFound desc = could not find container \"4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607\": container with ID starting with 4450280769cf12c05c407f2ca63cd72407151373d7c5a736fe690d61646ef607 not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.221880 4923 scope.go:117] "RemoveContainer" containerID="2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.222180 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b\": container with ID starting with 2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b not found: ID does not exist" containerID="2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.222267 4923 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b"} err="failed to get container status \"2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b\": rpc error: code = NotFound desc = could not find container \"2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b\": container with ID starting with 2c1bb8bb8d78304006acade734b21fe50a436b3a08c52418ce58f0bd02d6c49b not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.222346 4923 scope.go:117] "RemoveContainer" containerID="6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.222880 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d\": container with ID starting with 6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d not found: ID does not exist" containerID="6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.222932 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d"} err="failed to get container status \"6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d\": rpc error: code = NotFound desc = could not find container \"6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d\": container with ID starting with 6a6e3fba6a0a96168089d35c06acf1d08ab6f417303ad15fae3028d6433f3f1d not found: ID does not exist" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.237128 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded4d513-cc92-405c-8009-913f9aa7ea5f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:12 crc 
kubenswrapper[4923]: I0321 04:23:12.244113 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9grdl"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.247693 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9grdl"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.275839 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lcp5h"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.279563 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lcp5h"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.305071 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nj2n7"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.367803 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39dc2e68-1df7-426f-aa50-15a542f6995b" path="/var/lib/kubelet/pods/39dc2e68-1df7-426f-aa50-15a542f6995b/volumes" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.368549 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" path="/var/lib/kubelet/pods/3b1d4e5a-6b46-4203-ba69-6440844e48ad/volumes" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.369456 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" path="/var/lib/kubelet/pods/3dd7868c-7d3e-43de-b0f9-1ab51280fce5/volumes" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.370588 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd250302-91b0-41c4-b138-89559a78d375" path="/var/lib/kubelet/pods/dd250302-91b0-41c4-b138-89559a78d375/volumes" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.371216 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" path="/var/lib/kubelet/pods/ded4d513-cc92-405c-8009-913f9aa7ea5f/volumes" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896552 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9rmdl"] Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896750 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896760 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896768 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896774 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896784 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896790 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896799 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896806 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896814 4923 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="39dc2e68-1df7-426f-aa50-15a542f6995b" containerName="marketplace-operator" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896820 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="39dc2e68-1df7-426f-aa50-15a542f6995b" containerName="marketplace-operator" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896826 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896832 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896841 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896847 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896856 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896862 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896869 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896875 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896883 4923 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896888 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896898 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896903 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="extract-utilities" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896909 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896915 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: E0321 04:23:12.896925 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.896930 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="extract-content" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.897007 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd7868c-7d3e-43de-b0f9-1ab51280fce5" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.897018 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded4d513-cc92-405c-8009-913f9aa7ea5f" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.897027 4923 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3b1d4e5a-6b46-4203-ba69-6440844e48ad" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.897037 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="39dc2e68-1df7-426f-aa50-15a542f6995b" containerName="marketplace-operator" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.897046 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd250302-91b0-41c4-b138-89559a78d375" containerName="registry-server" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.897778 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.900130 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.911072 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rmdl"] Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.947146 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-catalog-content\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.947233 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-utilities\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.947315 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66l5j\" (UniqueName: \"kubernetes.io/projected/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-kube-api-access-66l5j\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.968056 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" event={"ID":"bba19ab1-fbf2-4a6f-a481-45e06896f9cd","Type":"ContainerStarted","Data":"fdba36b23ecb77e8b1734e449e1774bf8da93209d73eeb19beae0b166a255609"} Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.968103 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" event={"ID":"bba19ab1-fbf2-4a6f-a481-45e06896f9cd","Type":"ContainerStarted","Data":"f0c961ca90c9dacc8903066e5c85de02b51e754a3dcc989c63501aec2072a382"} Mar 21 04:23:12 crc kubenswrapper[4923]: I0321 04:23:12.990945 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" podStartSLOduration=1.990900797 podStartE2EDuration="1.990900797s" podCreationTimestamp="2026-03-21 04:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:23:12.984468909 +0000 UTC m=+358.137479996" watchObservedRunningTime="2026-03-21 04:23:12.990900797 +0000 UTC m=+358.143911884" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.048568 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-catalog-content\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 
04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.048620 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-utilities\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.048662 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66l5j\" (UniqueName: \"kubernetes.io/projected/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-kube-api-access-66l5j\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.049320 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-catalog-content\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.049443 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-utilities\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.076260 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66l5j\" (UniqueName: \"kubernetes.io/projected/c9c3cd1d-3b39-4990-9cfc-bbadb41f837e-kube-api-access-66l5j\") pod \"redhat-marketplace-9rmdl\" (UID: \"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e\") " pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 
04:23:13.241201 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.458482 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9rmdl"] Mar 21 04:23:13 crc kubenswrapper[4923]: W0321 04:23:13.465457 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9c3cd1d_3b39_4990_9cfc_bbadb41f837e.slice/crio-16f9f07924c532d69ecbc3ea6b9b21bf425702b4f1761de949caec5fba93c7bd WatchSource:0}: Error finding container 16f9f07924c532d69ecbc3ea6b9b21bf425702b4f1761de949caec5fba93c7bd: Status 404 returned error can't find the container with id 16f9f07924c532d69ecbc3ea6b9b21bf425702b4f1761de949caec5fba93c7bd Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.897786 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2s8wr"] Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.898714 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.903694 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.913554 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2s8wr"] Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.962443 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20826ac-d354-4e18-ba8e-affcf49ed187-utilities\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.962505 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20826ac-d354-4e18-ba8e-affcf49ed187-catalog-content\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.962611 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvpd\" (UniqueName: \"kubernetes.io/projected/d20826ac-d354-4e18-ba8e-affcf49ed187-kube-api-access-8nvpd\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.977466 4923 generic.go:334] "Generic (PLEG): container finished" podID="c9c3cd1d-3b39-4990-9cfc-bbadb41f837e" containerID="aadd2cab651fc00721cdd4323c783e97f58d6c84322cb1af8358515e81c5fffd" exitCode=0 Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.978224 4923 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rmdl" event={"ID":"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e","Type":"ContainerDied","Data":"aadd2cab651fc00721cdd4323c783e97f58d6c84322cb1af8358515e81c5fffd"} Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.978282 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.978302 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rmdl" event={"ID":"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e","Type":"ContainerStarted","Data":"16f9f07924c532d69ecbc3ea6b9b21bf425702b4f1761de949caec5fba93c7bd"} Mar 21 04:23:13 crc kubenswrapper[4923]: I0321 04:23:13.980581 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nj2n7" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.064550 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvpd\" (UniqueName: \"kubernetes.io/projected/d20826ac-d354-4e18-ba8e-affcf49ed187-kube-api-access-8nvpd\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.064904 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20826ac-d354-4e18-ba8e-affcf49ed187-utilities\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.065052 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d20826ac-d354-4e18-ba8e-affcf49ed187-catalog-content\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.065654 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d20826ac-d354-4e18-ba8e-affcf49ed187-catalog-content\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.065986 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d20826ac-d354-4e18-ba8e-affcf49ed187-utilities\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.091302 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvpd\" (UniqueName: \"kubernetes.io/projected/d20826ac-d354-4e18-ba8e-affcf49ed187-kube-api-access-8nvpd\") pod \"redhat-operators-2s8wr\" (UID: \"d20826ac-d354-4e18-ba8e-affcf49ed187\") " pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.217388 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.653540 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2s8wr"] Mar 21 04:23:14 crc kubenswrapper[4923]: W0321 04:23:14.660507 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd20826ac_d354_4e18_ba8e_affcf49ed187.slice/crio-eca1c107e38fe191e4516fa63b8539ca90522a35df9268ca8b79e8cff506b7c3 WatchSource:0}: Error finding container eca1c107e38fe191e4516fa63b8539ca90522a35df9268ca8b79e8cff506b7c3: Status 404 returned error can't find the container with id eca1c107e38fe191e4516fa63b8539ca90522a35df9268ca8b79e8cff506b7c3 Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.988539 4923 generic.go:334] "Generic (PLEG): container finished" podID="d20826ac-d354-4e18-ba8e-affcf49ed187" containerID="b0956ed5b865db0f6b1e64f9a4adaafc3c92049c275eb19890320e3e2c17706b" exitCode=0 Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.988743 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s8wr" event={"ID":"d20826ac-d354-4e18-ba8e-affcf49ed187","Type":"ContainerDied","Data":"b0956ed5b865db0f6b1e64f9a4adaafc3c92049c275eb19890320e3e2c17706b"} Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.989201 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s8wr" event={"ID":"d20826ac-d354-4e18-ba8e-affcf49ed187","Type":"ContainerStarted","Data":"eca1c107e38fe191e4516fa63b8539ca90522a35df9268ca8b79e8cff506b7c3"} Mar 21 04:23:14 crc kubenswrapper[4923]: I0321 04:23:14.992196 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rmdl" 
event={"ID":"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e","Type":"ContainerStarted","Data":"2860a8b567eb563232a5a45f718063d39a249bbadff69a86a2f24b22bd250ae6"} Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.698760 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tcvnp"] Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.699939 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.704094 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.713371 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcvnp"] Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.787277 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00194538-9e59-4093-b0a7-be2801bcef80-catalog-content\") pod \"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.787320 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00194538-9e59-4093-b0a7-be2801bcef80-utilities\") pod \"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.787393 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhgmc\" (UniqueName: \"kubernetes.io/projected/00194538-9e59-4093-b0a7-be2801bcef80-kube-api-access-dhgmc\") pod 
\"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.888739 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00194538-9e59-4093-b0a7-be2801bcef80-utilities\") pod \"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.889084 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhgmc\" (UniqueName: \"kubernetes.io/projected/00194538-9e59-4093-b0a7-be2801bcef80-kube-api-access-dhgmc\") pod \"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.889221 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00194538-9e59-4093-b0a7-be2801bcef80-catalog-content\") pod \"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.889467 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00194538-9e59-4093-b0a7-be2801bcef80-utilities\") pod \"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.889862 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00194538-9e59-4093-b0a7-be2801bcef80-catalog-content\") pod \"certified-operators-tcvnp\" (UID: 
\"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.913995 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhgmc\" (UniqueName: \"kubernetes.io/projected/00194538-9e59-4093-b0a7-be2801bcef80-kube-api-access-dhgmc\") pod \"certified-operators-tcvnp\" (UID: \"00194538-9e59-4093-b0a7-be2801bcef80\") " pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.999150 4923 generic.go:334] "Generic (PLEG): container finished" podID="c9c3cd1d-3b39-4990-9cfc-bbadb41f837e" containerID="2860a8b567eb563232a5a45f718063d39a249bbadff69a86a2f24b22bd250ae6" exitCode=0 Mar 21 04:23:15 crc kubenswrapper[4923]: I0321 04:23:15.999493 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rmdl" event={"ID":"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e","Type":"ContainerDied","Data":"2860a8b567eb563232a5a45f718063d39a249bbadff69a86a2f24b22bd250ae6"} Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.003930 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s8wr" event={"ID":"d20826ac-d354-4e18-ba8e-affcf49ed187","Type":"ContainerStarted","Data":"df0513ccf3ff04646194b2b0703db2e3c428df9f127f7d6e32b17581430187c1"} Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.021204 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.211806 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcvnp"] Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.298402 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ckfp6"] Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.321599 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckfp6"] Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.321807 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.325263 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.396950 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8pv4\" (UniqueName: \"kubernetes.io/projected/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-kube-api-access-h8pv4\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.397394 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-utilities\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.397447 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-catalog-content\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.498607 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8pv4\" (UniqueName: \"kubernetes.io/projected/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-kube-api-access-h8pv4\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.498784 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-utilities\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.498870 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-catalog-content\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.499367 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-utilities\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.499384 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-catalog-content\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.522011 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8pv4\" (UniqueName: \"kubernetes.io/projected/db0b017b-07ab-4c9a-b9ae-2111b970e1fe-kube-api-access-h8pv4\") pod \"community-operators-ckfp6\" (UID: \"db0b017b-07ab-4c9a-b9ae-2111b970e1fe\") " pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.642245 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:16 crc kubenswrapper[4923]: I0321 04:23:16.918863 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckfp6"] Mar 21 04:23:16 crc kubenswrapper[4923]: W0321 04:23:16.927434 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb0b017b_07ab_4c9a_b9ae_2111b970e1fe.slice/crio-eb72f59d07f1dea37a5becbdc849975bedc3bb63e1d1a9fdd72003fcf20ccbf9 WatchSource:0}: Error finding container eb72f59d07f1dea37a5becbdc849975bedc3bb63e1d1a9fdd72003fcf20ccbf9: Status 404 returned error can't find the container with id eb72f59d07f1dea37a5becbdc849975bedc3bb63e1d1a9fdd72003fcf20ccbf9 Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.012778 4923 generic.go:334] "Generic (PLEG): container finished" podID="00194538-9e59-4093-b0a7-be2801bcef80" containerID="b41a8969d6173a1214c3fec612288c2b1b38584e4eb1245e59fece0cc01299fd" exitCode=0 Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.012835 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcvnp" 
event={"ID":"00194538-9e59-4093-b0a7-be2801bcef80","Type":"ContainerDied","Data":"b41a8969d6173a1214c3fec612288c2b1b38584e4eb1245e59fece0cc01299fd"} Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.012915 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcvnp" event={"ID":"00194538-9e59-4093-b0a7-be2801bcef80","Type":"ContainerStarted","Data":"c9b641a430d5d6d9649a4db7ce20439692ad21c567903540383269c1f8cfc2a9"} Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.023437 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9rmdl" event={"ID":"c9c3cd1d-3b39-4990-9cfc-bbadb41f837e","Type":"ContainerStarted","Data":"a0cc79a9b5896a44bb002f9ca0c13049d1624ba3c7a5ddbd278baccb3e01a3f3"} Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.027077 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckfp6" event={"ID":"db0b017b-07ab-4c9a-b9ae-2111b970e1fe","Type":"ContainerStarted","Data":"eb72f59d07f1dea37a5becbdc849975bedc3bb63e1d1a9fdd72003fcf20ccbf9"} Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.035910 4923 generic.go:334] "Generic (PLEG): container finished" podID="d20826ac-d354-4e18-ba8e-affcf49ed187" containerID="df0513ccf3ff04646194b2b0703db2e3c428df9f127f7d6e32b17581430187c1" exitCode=0 Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.035979 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s8wr" event={"ID":"d20826ac-d354-4e18-ba8e-affcf49ed187","Type":"ContainerDied","Data":"df0513ccf3ff04646194b2b0703db2e3c428df9f127f7d6e32b17581430187c1"} Mar 21 04:23:17 crc kubenswrapper[4923]: I0321 04:23:17.057634 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9rmdl" podStartSLOduration=2.600364313 podStartE2EDuration="5.057613929s" podCreationTimestamp="2026-03-21 04:23:12 +0000 
UTC" firstStartedPulling="2026-03-21 04:23:13.979015692 +0000 UTC m=+359.132026779" lastFinishedPulling="2026-03-21 04:23:16.436265308 +0000 UTC m=+361.589276395" observedRunningTime="2026-03-21 04:23:17.054247066 +0000 UTC m=+362.207258163" watchObservedRunningTime="2026-03-21 04:23:17.057613929 +0000 UTC m=+362.210625026" Mar 21 04:23:18 crc kubenswrapper[4923]: I0321 04:23:18.045625 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s8wr" event={"ID":"d20826ac-d354-4e18-ba8e-affcf49ed187","Type":"ContainerStarted","Data":"c6559e764f4d01c27337a385dbf5afd8504603f4ac998a07664ba043f62307d6"} Mar 21 04:23:18 crc kubenswrapper[4923]: I0321 04:23:18.050892 4923 generic.go:334] "Generic (PLEG): container finished" podID="00194538-9e59-4093-b0a7-be2801bcef80" containerID="6146652846bd9436622d8f136b6d01a157cfb369daea5783cb8ad661b2357194" exitCode=0 Mar 21 04:23:18 crc kubenswrapper[4923]: I0321 04:23:18.050961 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcvnp" event={"ID":"00194538-9e59-4093-b0a7-be2801bcef80","Type":"ContainerDied","Data":"6146652846bd9436622d8f136b6d01a157cfb369daea5783cb8ad661b2357194"} Mar 21 04:23:18 crc kubenswrapper[4923]: I0321 04:23:18.052914 4923 generic.go:334] "Generic (PLEG): container finished" podID="db0b017b-07ab-4c9a-b9ae-2111b970e1fe" containerID="c01de800e65d9965443af55394384fc45bd0db5f675abf7849fbaa37a5579448" exitCode=0 Mar 21 04:23:18 crc kubenswrapper[4923]: I0321 04:23:18.053025 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckfp6" event={"ID":"db0b017b-07ab-4c9a-b9ae-2111b970e1fe","Type":"ContainerDied","Data":"c01de800e65d9965443af55394384fc45bd0db5f675abf7849fbaa37a5579448"} Mar 21 04:23:18 crc kubenswrapper[4923]: I0321 04:23:18.072798 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2s8wr" 
podStartSLOduration=2.395533933 podStartE2EDuration="5.072779659s" podCreationTimestamp="2026-03-21 04:23:13 +0000 UTC" firstStartedPulling="2026-03-21 04:23:14.993335806 +0000 UTC m=+360.146346893" lastFinishedPulling="2026-03-21 04:23:17.670581492 +0000 UTC m=+362.823592619" observedRunningTime="2026-03-21 04:23:18.068387884 +0000 UTC m=+363.221398981" watchObservedRunningTime="2026-03-21 04:23:18.072779659 +0000 UTC m=+363.225790746" Mar 21 04:23:19 crc kubenswrapper[4923]: I0321 04:23:19.062585 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckfp6" event={"ID":"db0b017b-07ab-4c9a-b9ae-2111b970e1fe","Type":"ContainerStarted","Data":"778b5cfea9c6cefbad3dea8d093636b9bf7d2ef3534e03e69d967df7a3f07b8f"} Mar 21 04:23:19 crc kubenswrapper[4923]: I0321 04:23:19.065517 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcvnp" event={"ID":"00194538-9e59-4093-b0a7-be2801bcef80","Type":"ContainerStarted","Data":"c940d2cd594ffd558f6e03c99eb1027e65cb2636c6d6979b7a1f3bbcba77c514"} Mar 21 04:23:19 crc kubenswrapper[4923]: I0321 04:23:19.111830 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tcvnp" podStartSLOduration=2.681927032 podStartE2EDuration="4.111790523s" podCreationTimestamp="2026-03-21 04:23:15 +0000 UTC" firstStartedPulling="2026-03-21 04:23:17.016979207 +0000 UTC m=+362.169990294" lastFinishedPulling="2026-03-21 04:23:18.446842698 +0000 UTC m=+363.599853785" observedRunningTime="2026-03-21 04:23:19.107226982 +0000 UTC m=+364.260238079" watchObservedRunningTime="2026-03-21 04:23:19.111790523 +0000 UTC m=+364.264801610" Mar 21 04:23:20 crc kubenswrapper[4923]: I0321 04:23:20.071562 4923 generic.go:334] "Generic (PLEG): container finished" podID="db0b017b-07ab-4c9a-b9ae-2111b970e1fe" containerID="778b5cfea9c6cefbad3dea8d093636b9bf7d2ef3534e03e69d967df7a3f07b8f" exitCode=0 Mar 21 04:23:20 crc 
kubenswrapper[4923]: I0321 04:23:20.072806 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckfp6" event={"ID":"db0b017b-07ab-4c9a-b9ae-2111b970e1fe","Type":"ContainerDied","Data":"778b5cfea9c6cefbad3dea8d093636b9bf7d2ef3534e03e69d967df7a3f07b8f"} Mar 21 04:23:21 crc kubenswrapper[4923]: I0321 04:23:21.081242 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckfp6" event={"ID":"db0b017b-07ab-4c9a-b9ae-2111b970e1fe","Type":"ContainerStarted","Data":"a9f702228aae1f432b047ae5322e6968593b1ddaf44a53998b04685f9a7e2c89"} Mar 21 04:23:21 crc kubenswrapper[4923]: I0321 04:23:21.107815 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ckfp6" podStartSLOduration=2.6028638969999998 podStartE2EDuration="5.107785902s" podCreationTimestamp="2026-03-21 04:23:16 +0000 UTC" firstStartedPulling="2026-03-21 04:23:18.054521006 +0000 UTC m=+363.207532093" lastFinishedPulling="2026-03-21 04:23:20.559443001 +0000 UTC m=+365.712454098" observedRunningTime="2026-03-21 04:23:21.104588884 +0000 UTC m=+366.257600041" watchObservedRunningTime="2026-03-21 04:23:21.107785902 +0000 UTC m=+366.260797029" Mar 21 04:23:23 crc kubenswrapper[4923]: I0321 04:23:23.241942 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:23 crc kubenswrapper[4923]: I0321 04:23:23.242346 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:23 crc kubenswrapper[4923]: I0321 04:23:23.309808 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:24 crc kubenswrapper[4923]: I0321 04:23:24.160052 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-9rmdl" Mar 21 04:23:24 crc kubenswrapper[4923]: I0321 04:23:24.217897 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:24 crc kubenswrapper[4923]: I0321 04:23:24.218001 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:25 crc kubenswrapper[4923]: I0321 04:23:25.273901 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2s8wr" podUID="d20826ac-d354-4e18-ba8e-affcf49ed187" containerName="registry-server" probeResult="failure" output=< Mar 21 04:23:25 crc kubenswrapper[4923]: timeout: failed to connect service ":50051" within 1s Mar 21 04:23:25 crc kubenswrapper[4923]: > Mar 21 04:23:26 crc kubenswrapper[4923]: I0321 04:23:26.022252 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:26 crc kubenswrapper[4923]: I0321 04:23:26.022493 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:26 crc kubenswrapper[4923]: I0321 04:23:26.077850 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:26 crc kubenswrapper[4923]: I0321 04:23:26.167602 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tcvnp" Mar 21 04:23:26 crc kubenswrapper[4923]: I0321 04:23:26.642777 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:26 crc kubenswrapper[4923]: I0321 04:23:26.642862 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:26 
crc kubenswrapper[4923]: I0321 04:23:26.711074 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:27 crc kubenswrapper[4923]: I0321 04:23:27.190153 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ckfp6" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.288015 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" podUID="b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" containerName="registry" containerID="cri-o://9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8" gracePeriod=30 Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.677947 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.767999 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.768414 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-bound-sa-token\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.768471 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmx2s\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-kube-api-access-lmx2s\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: 
\"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.768530 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-installation-pull-secrets\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.768566 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-trusted-ca\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.768599 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-tls\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.768648 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-certificates\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.768697 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-ca-trust-extracted\") pod \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\" (UID: \"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8\") " Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.769652 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.769843 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.774148 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-kube-api-access-lmx2s" (OuterVolumeSpecName: "kube-api-access-lmx2s") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "kube-api-access-lmx2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.774680 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.774687 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.781552 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.784877 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.796525 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" (UID: "b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.869877 4923 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.869913 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmx2s\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-kube-api-access-lmx2s\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.869931 4923 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.869942 4923 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.869952 4923 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.869962 4923 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:33 crc kubenswrapper[4923]: I0321 04:23:33.869972 4923 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 21 04:23:34 crc 
kubenswrapper[4923]: I0321 04:23:34.176201 4923 generic.go:334] "Generic (PLEG): container finished" podID="b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" containerID="9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8" exitCode=0 Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.176278 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" event={"ID":"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8","Type":"ContainerDied","Data":"9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8"} Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.176349 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" event={"ID":"b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8","Type":"ContainerDied","Data":"dc60f6ca6b6c2c569fb18fb3fef50a3bf0c0abebe699a49d89f7be74d3c01cb6"} Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.176384 4923 scope.go:117] "RemoveContainer" containerID="9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8" Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.176999 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bmnqq" Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.204970 4923 scope.go:117] "RemoveContainer" containerID="9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8" Mar 21 04:23:34 crc kubenswrapper[4923]: E0321 04:23:34.205469 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8\": container with ID starting with 9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8 not found: ID does not exist" containerID="9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8" Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.205509 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8"} err="failed to get container status \"9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8\": rpc error: code = NotFound desc = could not find container \"9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8\": container with ID starting with 9c7a4d29bc51132aec4fc84826c2b44cbd5c77e441512a2010cf079c420558d8 not found: ID does not exist" Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.220913 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bmnqq"] Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.230948 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bmnqq"] Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.267747 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.312611 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-2s8wr" Mar 21 04:23:34 crc kubenswrapper[4923]: I0321 04:23:34.368125 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" path="/var/lib/kubelet/pods/b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8/volumes" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.175936 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567784-ls96m"] Mar 21 04:24:00 crc kubenswrapper[4923]: E0321 04:24:00.176863 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" containerName="registry" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.176886 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" containerName="registry" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.177066 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2327f6c-aeb7-4fa3-bfe2-4a27dda3bad8" containerName="registry" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.177651 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567784-ls96m" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.180793 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.181377 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.182291 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.189434 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567784-ls96m"] Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.245385 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjn9z\" (UniqueName: \"kubernetes.io/projected/bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00-kube-api-access-qjn9z\") pod \"auto-csr-approver-29567784-ls96m\" (UID: \"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00\") " pod="openshift-infra/auto-csr-approver-29567784-ls96m" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.346635 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjn9z\" (UniqueName: \"kubernetes.io/projected/bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00-kube-api-access-qjn9z\") pod \"auto-csr-approver-29567784-ls96m\" (UID: \"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00\") " pod="openshift-infra/auto-csr-approver-29567784-ls96m" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.383417 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjn9z\" (UniqueName: \"kubernetes.io/projected/bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00-kube-api-access-qjn9z\") pod \"auto-csr-approver-29567784-ls96m\" (UID: \"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00\") " 
pod="openshift-infra/auto-csr-approver-29567784-ls96m" Mar 21 04:24:00 crc kubenswrapper[4923]: I0321 04:24:00.503701 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567784-ls96m" Mar 21 04:24:01 crc kubenswrapper[4923]: I0321 04:24:01.012629 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567784-ls96m"] Mar 21 04:24:01 crc kubenswrapper[4923]: I0321 04:24:01.340682 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567784-ls96m" event={"ID":"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00","Type":"ContainerStarted","Data":"961edd35f0d51a1250033483b18ed0a206975823e3c9c831c6e26ca88d251715"} Mar 21 04:24:03 crc kubenswrapper[4923]: I0321 04:24:03.236008 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:24:03 crc kubenswrapper[4923]: I0321 04:24:03.236561 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:24:03 crc kubenswrapper[4923]: I0321 04:24:03.355671 4923 generic.go:334] "Generic (PLEG): container finished" podID="bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00" containerID="7d0f4682a23456b75849130895040460941e184ed48c9db2d2422bda5ee4f0f8" exitCode=0 Mar 21 04:24:03 crc kubenswrapper[4923]: I0321 04:24:03.355732 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567784-ls96m" 
event={"ID":"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00","Type":"ContainerDied","Data":"7d0f4682a23456b75849130895040460941e184ed48c9db2d2422bda5ee4f0f8"} Mar 21 04:24:04 crc kubenswrapper[4923]: I0321 04:24:04.693079 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567784-ls96m" Mar 21 04:24:04 crc kubenswrapper[4923]: I0321 04:24:04.813823 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjn9z\" (UniqueName: \"kubernetes.io/projected/bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00-kube-api-access-qjn9z\") pod \"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00\" (UID: \"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00\") " Mar 21 04:24:04 crc kubenswrapper[4923]: I0321 04:24:04.823569 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00-kube-api-access-qjn9z" (OuterVolumeSpecName: "kube-api-access-qjn9z") pod "bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00" (UID: "bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00"). InnerVolumeSpecName "kube-api-access-qjn9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:24:04 crc kubenswrapper[4923]: I0321 04:24:04.915619 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjn9z\" (UniqueName: \"kubernetes.io/projected/bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00-kube-api-access-qjn9z\") on node \"crc\" DevicePath \"\"" Mar 21 04:24:05 crc kubenswrapper[4923]: I0321 04:24:05.369977 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567784-ls96m" event={"ID":"bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00","Type":"ContainerDied","Data":"961edd35f0d51a1250033483b18ed0a206975823e3c9c831c6e26ca88d251715"} Mar 21 04:24:05 crc kubenswrapper[4923]: I0321 04:24:05.370037 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="961edd35f0d51a1250033483b18ed0a206975823e3c9c831c6e26ca88d251715" Mar 21 04:24:05 crc kubenswrapper[4923]: I0321 04:24:05.370092 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567784-ls96m" Mar 21 04:24:33 crc kubenswrapper[4923]: I0321 04:24:33.235883 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:24:33 crc kubenswrapper[4923]: I0321 04:24:33.237509 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.235560 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.236208 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.236272 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.237364 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff44cc77171b74283f670b22d040687ae1a94cc7da74c4f35593b3a93ed4a47d"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.237484 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://ff44cc77171b74283f670b22d040687ae1a94cc7da74c4f35593b3a93ed4a47d" gracePeriod=600 Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.784358 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="ff44cc77171b74283f670b22d040687ae1a94cc7da74c4f35593b3a93ed4a47d" exitCode=0 Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.784457 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"ff44cc77171b74283f670b22d040687ae1a94cc7da74c4f35593b3a93ed4a47d"} Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.784607 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"48f63297ab4f945959ffea8973b2bacfde5b52e3247e4ff6ad014d7031e1c7df"} Mar 21 04:25:03 crc kubenswrapper[4923]: I0321 04:25:03.784627 4923 scope.go:117] "RemoveContainer" containerID="76f6e7c342bff59611eaa01ba2ed57f1a12fe75dab0e15a48267ea824f439b6f" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.156445 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567786-dskdm"] Mar 21 04:26:00 crc kubenswrapper[4923]: E0321 04:26:00.157222 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00" containerName="oc" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.157234 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00" containerName="oc" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.157350 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00" containerName="oc" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.157772 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567786-dskdm" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.164790 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.164969 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.165354 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.170600 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567786-dskdm"] Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.336745 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9xm\" (UniqueName: \"kubernetes.io/projected/72d36ca2-41f9-43a2-a4f2-92eb9c9d947f-kube-api-access-gq9xm\") pod \"auto-csr-approver-29567786-dskdm\" (UID: \"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f\") " pod="openshift-infra/auto-csr-approver-29567786-dskdm" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.438725 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9xm\" (UniqueName: \"kubernetes.io/projected/72d36ca2-41f9-43a2-a4f2-92eb9c9d947f-kube-api-access-gq9xm\") pod \"auto-csr-approver-29567786-dskdm\" (UID: \"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f\") " pod="openshift-infra/auto-csr-approver-29567786-dskdm" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.466567 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq9xm\" (UniqueName: \"kubernetes.io/projected/72d36ca2-41f9-43a2-a4f2-92eb9c9d947f-kube-api-access-gq9xm\") pod \"auto-csr-approver-29567786-dskdm\" (UID: \"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f\") " 
pod="openshift-infra/auto-csr-approver-29567786-dskdm" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.492178 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567786-dskdm" Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.712657 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567786-dskdm"] Mar 21 04:26:00 crc kubenswrapper[4923]: I0321 04:26:00.725547 4923 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 04:26:01 crc kubenswrapper[4923]: I0321 04:26:01.228920 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567786-dskdm" event={"ID":"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f","Type":"ContainerStarted","Data":"1f44e5e6d7f6f136995161e53cb21074c02f49d047c082bf1fec3c5e18bc2214"} Mar 21 04:26:02 crc kubenswrapper[4923]: I0321 04:26:02.240125 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567786-dskdm" event={"ID":"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f","Type":"ContainerStarted","Data":"3ab34c821665e75c8bcea6fcf5ac8e022849ddb87a1a9b51facf2a60a1d35060"} Mar 21 04:26:02 crc kubenswrapper[4923]: I0321 04:26:02.257830 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29567786-dskdm" podStartSLOduration=1.237227006 podStartE2EDuration="2.257806808s" podCreationTimestamp="2026-03-21 04:26:00 +0000 UTC" firstStartedPulling="2026-03-21 04:26:00.725250956 +0000 UTC m=+525.878262043" lastFinishedPulling="2026-03-21 04:26:01.745830718 +0000 UTC m=+526.898841845" observedRunningTime="2026-03-21 04:26:02.254120002 +0000 UTC m=+527.407131109" watchObservedRunningTime="2026-03-21 04:26:02.257806808 +0000 UTC m=+527.410817905" Mar 21 04:26:03 crc kubenswrapper[4923]: I0321 04:26:03.252070 4923 generic.go:334] "Generic (PLEG): container finished" 
podID="72d36ca2-41f9-43a2-a4f2-92eb9c9d947f" containerID="3ab34c821665e75c8bcea6fcf5ac8e022849ddb87a1a9b51facf2a60a1d35060" exitCode=0 Mar 21 04:26:03 crc kubenswrapper[4923]: I0321 04:26:03.252155 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567786-dskdm" event={"ID":"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f","Type":"ContainerDied","Data":"3ab34c821665e75c8bcea6fcf5ac8e022849ddb87a1a9b51facf2a60a1d35060"} Mar 21 04:26:04 crc kubenswrapper[4923]: I0321 04:26:04.568031 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567786-dskdm" Mar 21 04:26:04 crc kubenswrapper[4923]: I0321 04:26:04.702044 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq9xm\" (UniqueName: \"kubernetes.io/projected/72d36ca2-41f9-43a2-a4f2-92eb9c9d947f-kube-api-access-gq9xm\") pod \"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f\" (UID: \"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f\") " Mar 21 04:26:04 crc kubenswrapper[4923]: I0321 04:26:04.711845 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d36ca2-41f9-43a2-a4f2-92eb9c9d947f-kube-api-access-gq9xm" (OuterVolumeSpecName: "kube-api-access-gq9xm") pod "72d36ca2-41f9-43a2-a4f2-92eb9c9d947f" (UID: "72d36ca2-41f9-43a2-a4f2-92eb9c9d947f"). InnerVolumeSpecName "kube-api-access-gq9xm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:26:04 crc kubenswrapper[4923]: I0321 04:26:04.804160 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq9xm\" (UniqueName: \"kubernetes.io/projected/72d36ca2-41f9-43a2-a4f2-92eb9c9d947f-kube-api-access-gq9xm\") on node \"crc\" DevicePath \"\"" Mar 21 04:26:05 crc kubenswrapper[4923]: I0321 04:26:05.268286 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567786-dskdm" event={"ID":"72d36ca2-41f9-43a2-a4f2-92eb9c9d947f","Type":"ContainerDied","Data":"1f44e5e6d7f6f136995161e53cb21074c02f49d047c082bf1fec3c5e18bc2214"} Mar 21 04:26:05 crc kubenswrapper[4923]: I0321 04:26:05.268350 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f44e5e6d7f6f136995161e53cb21074c02f49d047c082bf1fec3c5e18bc2214" Mar 21 04:26:05 crc kubenswrapper[4923]: I0321 04:26:05.268362 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567786-dskdm" Mar 21 04:26:05 crc kubenswrapper[4923]: I0321 04:26:05.318686 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567780-l4jnj"] Mar 21 04:26:05 crc kubenswrapper[4923]: I0321 04:26:05.321415 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567780-l4jnj"] Mar 21 04:26:06 crc kubenswrapper[4923]: I0321 04:26:06.369695 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8587a2-6b9d-46d9-aa09-c726997f9681" path="/var/lib/kubelet/pods/dd8587a2-6b9d-46d9-aa09-c726997f9681/volumes" Mar 21 04:26:16 crc kubenswrapper[4923]: I0321 04:26:16.841288 4923 scope.go:117] "RemoveContainer" containerID="9a5dcb0b681fb55dc6a3dfd385784602c8151eaad32048242b377ef4ceb5844e" Mar 21 04:27:03 crc kubenswrapper[4923]: I0321 04:27:03.235540 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:27:03 crc kubenswrapper[4923]: I0321 04:27:03.236149 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:27:16 crc kubenswrapper[4923]: I0321 04:27:16.880069 4923 scope.go:117] "RemoveContainer" containerID="0bb5a00c56f27497688f44231653e776caa9673c3d8a657c36dc3a5f06867724" Mar 21 04:27:16 crc kubenswrapper[4923]: I0321 04:27:16.904699 4923 scope.go:117] "RemoveContainer" containerID="6ecd92432efd87b8d6c313e494fee0df2e7c9b6e9b2e4d2d914471e7f1811ff0" Mar 21 04:27:16 crc kubenswrapper[4923]: I0321 04:27:16.923387 4923 scope.go:117] "RemoveContainer" containerID="99600ca879154f694f6a2dcf96cd3009c8ef61b7c0fef63d979a2e1140df867f" Mar 21 04:27:16 crc kubenswrapper[4923]: I0321 04:27:16.974443 4923 scope.go:117] "RemoveContainer" containerID="c88da9b680adeecf068170339af886890e0774f7272ad159acfedfd05c754dd1" Mar 21 04:27:33 crc kubenswrapper[4923]: I0321 04:27:33.236130 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:27:33 crc kubenswrapper[4923]: I0321 04:27:33.236600 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.157647 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567788-6ztlt"] Mar 21 04:28:00 crc kubenswrapper[4923]: E0321 04:28:00.158834 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d36ca2-41f9-43a2-a4f2-92eb9c9d947f" containerName="oc" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.158862 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d36ca2-41f9-43a2-a4f2-92eb9c9d947f" containerName="oc" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.159129 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d36ca2-41f9-43a2-a4f2-92eb9c9d947f" containerName="oc" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.161568 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567788-6ztlt" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.166040 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.166114 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.166196 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.170819 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567788-6ztlt"] Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.289162 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6krb\" (UniqueName: 
\"kubernetes.io/projected/208f0755-d10b-4b07-a191-dd2f2417f635-kube-api-access-n6krb\") pod \"auto-csr-approver-29567788-6ztlt\" (UID: \"208f0755-d10b-4b07-a191-dd2f2417f635\") " pod="openshift-infra/auto-csr-approver-29567788-6ztlt" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.390794 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6krb\" (UniqueName: \"kubernetes.io/projected/208f0755-d10b-4b07-a191-dd2f2417f635-kube-api-access-n6krb\") pod \"auto-csr-approver-29567788-6ztlt\" (UID: \"208f0755-d10b-4b07-a191-dd2f2417f635\") " pod="openshift-infra/auto-csr-approver-29567788-6ztlt" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.425612 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6krb\" (UniqueName: \"kubernetes.io/projected/208f0755-d10b-4b07-a191-dd2f2417f635-kube-api-access-n6krb\") pod \"auto-csr-approver-29567788-6ztlt\" (UID: \"208f0755-d10b-4b07-a191-dd2f2417f635\") " pod="openshift-infra/auto-csr-approver-29567788-6ztlt" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.493826 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567788-6ztlt" Mar 21 04:28:00 crc kubenswrapper[4923]: I0321 04:28:00.737300 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567788-6ztlt"] Mar 21 04:28:01 crc kubenswrapper[4923]: I0321 04:28:01.140759 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567788-6ztlt" event={"ID":"208f0755-d10b-4b07-a191-dd2f2417f635","Type":"ContainerStarted","Data":"bcc0ab514bb5d5c430b26184ea1cf8c382b4f80ba390b756a25a88e15e1baf0e"} Mar 21 04:28:03 crc kubenswrapper[4923]: I0321 04:28:03.161610 4923 generic.go:334] "Generic (PLEG): container finished" podID="208f0755-d10b-4b07-a191-dd2f2417f635" containerID="f5096b59ebf59fa8e817a02949451bfa1788bc19ac1a6491917b9c93eff61574" exitCode=0 Mar 21 04:28:03 crc kubenswrapper[4923]: I0321 04:28:03.161679 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567788-6ztlt" event={"ID":"208f0755-d10b-4b07-a191-dd2f2417f635","Type":"ContainerDied","Data":"f5096b59ebf59fa8e817a02949451bfa1788bc19ac1a6491917b9c93eff61574"} Mar 21 04:28:03 crc kubenswrapper[4923]: I0321 04:28:03.235861 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:28:03 crc kubenswrapper[4923]: I0321 04:28:03.235959 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:28:03 crc kubenswrapper[4923]: I0321 04:28:03.236023 4923 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:28:03 crc kubenswrapper[4923]: I0321 04:28:03.236790 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48f63297ab4f945959ffea8973b2bacfde5b52e3247e4ff6ad014d7031e1c7df"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 04:28:03 crc kubenswrapper[4923]: I0321 04:28:03.236903 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://48f63297ab4f945959ffea8973b2bacfde5b52e3247e4ff6ad014d7031e1c7df" gracePeriod=600 Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.173689 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="48f63297ab4f945959ffea8973b2bacfde5b52e3247e4ff6ad014d7031e1c7df" exitCode=0 Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.173791 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"48f63297ab4f945959ffea8973b2bacfde5b52e3247e4ff6ad014d7031e1c7df"} Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.174679 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"9ca9400ca4c664dfef2280af9aecd52b539e59a9f840922ab22c0e27838ee22c"} Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.174717 4923 scope.go:117] "RemoveContainer" 
containerID="ff44cc77171b74283f670b22d040687ae1a94cc7da74c4f35593b3a93ed4a47d" Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.476384 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567788-6ztlt" Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.651149 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6krb\" (UniqueName: \"kubernetes.io/projected/208f0755-d10b-4b07-a191-dd2f2417f635-kube-api-access-n6krb\") pod \"208f0755-d10b-4b07-a191-dd2f2417f635\" (UID: \"208f0755-d10b-4b07-a191-dd2f2417f635\") " Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.661674 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208f0755-d10b-4b07-a191-dd2f2417f635-kube-api-access-n6krb" (OuterVolumeSpecName: "kube-api-access-n6krb") pod "208f0755-d10b-4b07-a191-dd2f2417f635" (UID: "208f0755-d10b-4b07-a191-dd2f2417f635"). InnerVolumeSpecName "kube-api-access-n6krb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:28:04 crc kubenswrapper[4923]: I0321 04:28:04.752896 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6krb\" (UniqueName: \"kubernetes.io/projected/208f0755-d10b-4b07-a191-dd2f2417f635-kube-api-access-n6krb\") on node \"crc\" DevicePath \"\"" Mar 21 04:28:05 crc kubenswrapper[4923]: I0321 04:28:05.188209 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567788-6ztlt" event={"ID":"208f0755-d10b-4b07-a191-dd2f2417f635","Type":"ContainerDied","Data":"bcc0ab514bb5d5c430b26184ea1cf8c382b4f80ba390b756a25a88e15e1baf0e"} Mar 21 04:28:05 crc kubenswrapper[4923]: I0321 04:28:05.188743 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc0ab514bb5d5c430b26184ea1cf8c382b4f80ba390b756a25a88e15e1baf0e" Mar 21 04:28:05 crc kubenswrapper[4923]: I0321 04:28:05.188300 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567788-6ztlt" Mar 21 04:28:05 crc kubenswrapper[4923]: I0321 04:28:05.559392 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567782-b2xpp"] Mar 21 04:28:05 crc kubenswrapper[4923]: I0321 04:28:05.566706 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567782-b2xpp"] Mar 21 04:28:06 crc kubenswrapper[4923]: I0321 04:28:06.377585 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="717a82ca-ac10-41e8-9dae-ba2ce0aec329" path="/var/lib/kubelet/pods/717a82ca-ac10-41e8-9dae-ba2ce0aec329/volumes" Mar 21 04:28:17 crc kubenswrapper[4923]: I0321 04:28:17.030379 4923 scope.go:117] "RemoveContainer" containerID="01cda3afc2c88a1f73cc8dbfd9ec31a44d2c09a620bf520936c20ba613690756" Mar 21 04:29:39 crc kubenswrapper[4923]: I0321 04:29:39.716856 4923 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.726057 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-df6ks"] Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.727316 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-controller" containerID="cri-o://6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.727393 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="nbdb" containerID="cri-o://678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.727498 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="northd" containerID="cri-o://5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.727561 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.727620 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-node" 
containerID="cri-o://c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.727674 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-acl-logging" containerID="cri-o://d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.728052 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="sbdb" containerID="cri-o://6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.778304 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovnkube-controller" containerID="cri-o://1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647" gracePeriod=30 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.948927 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bxklc_b3c415c9-5270-474d-9361-3df6701f2b3e/kube-multus/0.log" Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.948988 4923 generic.go:334] "Generic (PLEG): container finished" podID="b3c415c9-5270-474d-9361-3df6701f2b3e" containerID="d584ef4cfd06291da7aa4f0627f6a16a9fbd3891ede970538bc80bbc32dc8311" exitCode=2 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.949056 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bxklc" event={"ID":"b3c415c9-5270-474d-9361-3df6701f2b3e","Type":"ContainerDied","Data":"d584ef4cfd06291da7aa4f0627f6a16a9fbd3891ede970538bc80bbc32dc8311"} Mar 21 04:29:51 crc 
kubenswrapper[4923]: I0321 04:29:51.949561 4923 scope.go:117] "RemoveContainer" containerID="d584ef4cfd06291da7aa4f0627f6a16a9fbd3891ede970538bc80bbc32dc8311" Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.955457 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-df6ks_95d61f2a-3f56-4d98-a1df-384973815163/ovn-acl-logging/0.log" Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.959591 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-df6ks_95d61f2a-3f56-4d98-a1df-384973815163/ovn-controller/0.log" Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960018 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647" exitCode=0 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960049 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba" exitCode=0 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960056 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3" exitCode=0 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960062 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4" exitCode=143 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960070 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55" exitCode=143 Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960091 4923 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647"} Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960116 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba"} Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960126 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3"} Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960135 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4"} Mar 21 04:29:51 crc kubenswrapper[4923]: I0321 04:29:51.960144 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55"} Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.036596 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-df6ks_95d61f2a-3f56-4d98-a1df-384973815163/ovn-acl-logging/0.log" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.037429 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-df6ks_95d61f2a-3f56-4d98-a1df-384973815163/ovn-controller/0.log" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 
04:29:52.037840 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.093957 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gx9n6"] Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094174 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-controller" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094194 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-controller" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094210 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208f0755-d10b-4b07-a191-dd2f2417f635" containerName="oc" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094220 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="208f0755-d10b-4b07-a191-dd2f2417f635" containerName="oc" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094235 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovnkube-controller" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094243 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovnkube-controller" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094252 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="sbdb" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094260 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="sbdb" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094274 4923 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="nbdb" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094282 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="nbdb" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094295 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kubecfg-setup" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094303 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kubecfg-setup" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094336 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="northd" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094345 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="northd" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094357 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-ovn-metrics" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094365 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-ovn-metrics" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094378 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-acl-logging" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094387 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-acl-logging" Mar 21 04:29:52 crc kubenswrapper[4923]: E0321 04:29:52.094399 4923 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-node" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094537 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-node" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094774 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="nbdb" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094789 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovnkube-controller" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094799 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="sbdb" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094811 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="208f0755-d10b-4b07-a191-dd2f2417f635" containerName="oc" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094823 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-ovn-metrics" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094836 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-acl-logging" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094846 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="ovn-controller" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094870 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="kube-rbac-proxy-node" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.094886 4923 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="95d61f2a-3f56-4d98-a1df-384973815163" containerName="northd" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.097086 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169091 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-node-log\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169140 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-openvswitch\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169182 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95d61f2a-3f56-4d98-a1df-384973815163-ovn-node-metrics-cert\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169205 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-netd\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169220 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-node-log" (OuterVolumeSpecName: "node-log") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: 
"95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169238 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-var-lib-cni-networks-ovn-kubernetes\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169264 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-etc-openvswitch\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169298 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-script-lib\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169298 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169338 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-ovn-kubernetes\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169367 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169420 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-netns\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169413 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169444 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169383 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169474 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-log-socket" (OuterVolumeSpecName: "log-socket") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169392 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169447 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-log-socket\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169539 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-ovn\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169575 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-slash\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169620 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-config\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169641 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-var-lib-openvswitch\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169663 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-kubelet\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169696 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-env-overrides\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169715 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-systemd-units\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169737 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-systemd\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169757 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-bin\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169780 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169792 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk9cv\" (UniqueName: \"kubernetes.io/projected/95d61f2a-3f56-4d98-a1df-384973815163-kube-api-access-dk9cv\") pod \"95d61f2a-3f56-4d98-a1df-384973815163\" (UID: \"95d61f2a-3f56-4d98-a1df-384973815163\") " Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169820 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169848 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.169876 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-slash" (OuterVolumeSpecName: "host-slash") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170164 4923 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-slash\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170182 4923 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170198 4923 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-node-log\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170209 4923 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170220 4923 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170235 4923 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170246 4923 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc 
kubenswrapper[4923]: I0321 04:29:52.170257 4923 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170269 4923 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170282 4923 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170293 4923 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-log-socket\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170305 4923 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170158 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170280 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170372 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170446 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.170652 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.175881 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d61f2a-3f56-4d98-a1df-384973815163-kube-api-access-dk9cv" (OuterVolumeSpecName: "kube-api-access-dk9cv") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "kube-api-access-dk9cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.177123 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95d61f2a-3f56-4d98-a1df-384973815163-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.191950 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "95d61f2a-3f56-4d98-a1df-384973815163" (UID: "95d61f2a-3f56-4d98-a1df-384973815163"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.271850 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovnkube-script-lib\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.271933 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-etc-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.271994 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82vj\" (UniqueName: \"kubernetes.io/projected/ec3c3ace-e26b-4a78-9f67-c02cea959a03-kube-api-access-r82vj\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272038 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovnkube-config\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272178 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-cni-netd\") pod \"ovnkube-node-gx9n6\" (UID: 
\"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272252 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-node-log\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272318 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-log-socket\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272384 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-cni-bin\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272425 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272498 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-env-overrides\") pod \"ovnkube-node-gx9n6\" (UID: 
\"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272538 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-ovn\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272577 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272618 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-var-lib-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272652 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-systemd-units\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272692 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-run-ovn-kubernetes\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272732 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-run-netns\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272760 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-systemd\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272794 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovn-node-metrics-cert\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272822 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-slash\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272886 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-kubelet\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272965 4923 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.272994 4923 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.273019 4923 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.273076 4923 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.273094 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk9cv\" (UniqueName: \"kubernetes.io/projected/95d61f2a-3f56-4d98-a1df-384973815163-kube-api-access-dk9cv\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.273112 4923 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/95d61f2a-3f56-4d98-a1df-384973815163-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.273146 4923 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/95d61f2a-3f56-4d98-a1df-384973815163-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.273163 4923 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/95d61f2a-3f56-4d98-a1df-384973815163-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374154 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-cni-bin\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374208 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374238 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-env-overrides\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374261 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-ovn\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374291 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374339 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-systemd-units\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374354 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-cni-bin\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374401 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-var-lib-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374366 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-var-lib-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374445 4923 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-ovn\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374454 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374475 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-systemd-units\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374500 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374536 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-run-ovn-kubernetes\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374583 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-run-ovn-kubernetes\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374635 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-run-netns\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374674 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-systemd\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374714 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-slash\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374757 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovn-node-metrics-cert\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374824 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-kubelet\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374858 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-slash\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374860 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovnkube-script-lib\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374888 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-run-netns\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374932 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-etc-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374968 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-kubelet\") pod \"ovnkube-node-gx9n6\" 
(UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374980 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82vj\" (UniqueName: \"kubernetes.io/projected/ec3c3ace-e26b-4a78-9f67-c02cea959a03-kube-api-access-r82vj\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375013 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-etc-openvswitch\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.374824 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-run-systemd\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375025 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovnkube-config\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375124 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-cni-netd\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375163 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-node-log\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375226 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-log-socket\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375355 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-log-socket\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375403 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-host-cni-netd\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375445 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ec3c3ace-e26b-4a78-9f67-c02cea959a03-node-log\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375545 4923 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-env-overrides\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.375932 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovnkube-script-lib\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.376071 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovnkube-config\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.378560 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec3c3ace-e26b-4a78-9f67-c02cea959a03-ovn-node-metrics-cert\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.403579 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82vj\" (UniqueName: \"kubernetes.io/projected/ec3c3ace-e26b-4a78-9f67-c02cea959a03-kube-api-access-r82vj\") pod \"ovnkube-node-gx9n6\" (UID: \"ec3c3ace-e26b-4a78-9f67-c02cea959a03\") " pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.410710 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:52 crc kubenswrapper[4923]: W0321 04:29:52.451571 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec3c3ace_e26b_4a78_9f67_c02cea959a03.slice/crio-5be0c48d40f372ca1b997d97e572054cf8a5f628b63d16df74259268a8ce12cf WatchSource:0}: Error finding container 5be0c48d40f372ca1b997d97e572054cf8a5f628b63d16df74259268a8ce12cf: Status 404 returned error can't find the container with id 5be0c48d40f372ca1b997d97e572054cf8a5f628b63d16df74259268a8ce12cf Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.968874 4923 generic.go:334] "Generic (PLEG): container finished" podID="ec3c3ace-e26b-4a78-9f67-c02cea959a03" containerID="988cbab6632dec1b66e66c07080a873aabb6a7cdca4f7bf45df41a0422d688ca" exitCode=0 Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.969022 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerDied","Data":"988cbab6632dec1b66e66c07080a873aabb6a7cdca4f7bf45df41a0422d688ca"} Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.969397 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"5be0c48d40f372ca1b997d97e572054cf8a5f628b63d16df74259268a8ce12cf"} Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.990176 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-df6ks_95d61f2a-3f56-4d98-a1df-384973815163/ovn-acl-logging/0.log" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.992266 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-df6ks_95d61f2a-3f56-4d98-a1df-384973815163/ovn-controller/0.log" Mar 21 04:29:52 crc 
kubenswrapper[4923]: I0321 04:29:52.993734 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661" exitCode=0 Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.993774 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6" exitCode=0 Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.993795 4923 generic.go:334] "Generic (PLEG): container finished" podID="95d61f2a-3f56-4d98-a1df-384973815163" containerID="5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e" exitCode=0 Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.993915 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661"} Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.994051 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6"} Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.994082 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e"} Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.994110 4923 scope.go:117] "RemoveContainer" containerID="1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.994124 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" event={"ID":"95d61f2a-3f56-4d98-a1df-384973815163","Type":"ContainerDied","Data":"326b4a5a50855478d5e1c8c394e8a09c035423c177b683af529da1d6145ee7b5"} Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.994080 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-df6ks" Mar 21 04:29:52 crc kubenswrapper[4923]: I0321 04:29:52.999042 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bxklc_b3c415c9-5270-474d-9361-3df6701f2b3e/kube-multus/0.log" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:52.999202 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bxklc" event={"ID":"b3c415c9-5270-474d-9361-3df6701f2b3e","Type":"ContainerStarted","Data":"ff4181cd4ee7ec0505e3778e70ab7a3dcb163cf9ebb5e06d99cc642c7fc9686c"} Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.034066 4923 scope.go:117] "RemoveContainer" containerID="6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.069164 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-df6ks"] Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.077375 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-df6ks"] Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.078602 4923 scope.go:117] "RemoveContainer" containerID="678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.106212 4923 scope.go:117] "RemoveContainer" containerID="5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.130515 4923 scope.go:117] "RemoveContainer" containerID="87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba" Mar 21 04:29:53 crc 
kubenswrapper[4923]: I0321 04:29:53.152780 4923 scope.go:117] "RemoveContainer" containerID="c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.171592 4923 scope.go:117] "RemoveContainer" containerID="d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.196177 4923 scope.go:117] "RemoveContainer" containerID="6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.210470 4923 scope.go:117] "RemoveContainer" containerID="6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.240033 4923 scope.go:117] "RemoveContainer" containerID="1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.240538 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647\": container with ID starting with 1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647 not found: ID does not exist" containerID="1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.240569 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647"} err="failed to get container status \"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647\": rpc error: code = NotFound desc = could not find container \"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647\": container with ID starting with 1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.240591 
4923 scope.go:117] "RemoveContainer" containerID="6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.240923 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661\": container with ID starting with 6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661 not found: ID does not exist" containerID="6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.240973 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661"} err="failed to get container status \"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661\": rpc error: code = NotFound desc = could not find container \"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661\": container with ID starting with 6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.241006 4923 scope.go:117] "RemoveContainer" containerID="678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.241389 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6\": container with ID starting with 678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6 not found: ID does not exist" containerID="678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.241417 4923 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6"} err="failed to get container status \"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6\": rpc error: code = NotFound desc = could not find container \"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6\": container with ID starting with 678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.241434 4923 scope.go:117] "RemoveContainer" containerID="5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.241714 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e\": container with ID starting with 5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e not found: ID does not exist" containerID="5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.241742 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e"} err="failed to get container status \"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e\": rpc error: code = NotFound desc = could not find container \"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e\": container with ID starting with 5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.241759 4923 scope.go:117] "RemoveContainer" containerID="87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.242033 4923 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba\": container with ID starting with 87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba not found: ID does not exist" containerID="87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.242069 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba"} err="failed to get container status \"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba\": rpc error: code = NotFound desc = could not find container \"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba\": container with ID starting with 87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.242091 4923 scope.go:117] "RemoveContainer" containerID="c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.242309 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3\": container with ID starting with c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3 not found: ID does not exist" containerID="c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.242351 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3"} err="failed to get container status \"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3\": rpc error: code = NotFound desc = could not find container 
\"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3\": container with ID starting with c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.242371 4923 scope.go:117] "RemoveContainer" containerID="d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.242636 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4\": container with ID starting with d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4 not found: ID does not exist" containerID="d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.242688 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4"} err="failed to get container status \"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4\": rpc error: code = NotFound desc = could not find container \"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4\": container with ID starting with d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.242728 4923 scope.go:117] "RemoveContainer" containerID="6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.243027 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55\": container with ID starting with 6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55 not found: ID does not exist" 
containerID="6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.243051 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55"} err="failed to get container status \"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55\": rpc error: code = NotFound desc = could not find container \"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55\": container with ID starting with 6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.243065 4923 scope.go:117] "RemoveContainer" containerID="6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3" Mar 21 04:29:53 crc kubenswrapper[4923]: E0321 04:29:53.243404 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3\": container with ID starting with 6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3 not found: ID does not exist" containerID="6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.243438 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3"} err="failed to get container status \"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3\": rpc error: code = NotFound desc = could not find container \"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3\": container with ID starting with 6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.243458 4923 scope.go:117] 
"RemoveContainer" containerID="1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.243844 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647"} err="failed to get container status \"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647\": rpc error: code = NotFound desc = could not find container \"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647\": container with ID starting with 1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.243871 4923 scope.go:117] "RemoveContainer" containerID="6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244174 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661"} err="failed to get container status \"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661\": rpc error: code = NotFound desc = could not find container \"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661\": container with ID starting with 6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244195 4923 scope.go:117] "RemoveContainer" containerID="678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244432 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6"} err="failed to get container status \"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6\": rpc error: code = 
NotFound desc = could not find container \"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6\": container with ID starting with 678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244451 4923 scope.go:117] "RemoveContainer" containerID="5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244633 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e"} err="failed to get container status \"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e\": rpc error: code = NotFound desc = could not find container \"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e\": container with ID starting with 5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244651 4923 scope.go:117] "RemoveContainer" containerID="87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244827 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba"} err="failed to get container status \"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba\": rpc error: code = NotFound desc = could not find container \"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba\": container with ID starting with 87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.244873 4923 scope.go:117] "RemoveContainer" containerID="c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3" Mar 21 04:29:53 crc 
kubenswrapper[4923]: I0321 04:29:53.245172 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3"} err="failed to get container status \"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3\": rpc error: code = NotFound desc = could not find container \"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3\": container with ID starting with c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.245192 4923 scope.go:117] "RemoveContainer" containerID="d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.245582 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4"} err="failed to get container status \"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4\": rpc error: code = NotFound desc = could not find container \"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4\": container with ID starting with d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.245621 4923 scope.go:117] "RemoveContainer" containerID="6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.245875 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55"} err="failed to get container status \"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55\": rpc error: code = NotFound desc = could not find container \"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55\": container 
with ID starting with 6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.245889 4923 scope.go:117] "RemoveContainer" containerID="6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.246136 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3"} err="failed to get container status \"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3\": rpc error: code = NotFound desc = could not find container \"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3\": container with ID starting with 6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.246152 4923 scope.go:117] "RemoveContainer" containerID="1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.246461 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647"} err="failed to get container status \"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647\": rpc error: code = NotFound desc = could not find container \"1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647\": container with ID starting with 1eccbe980bac77244008c06eac3d1bad7762baa0fc1947c6df9f95fce761f647 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.246502 4923 scope.go:117] "RemoveContainer" containerID="6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.246761 4923 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661"} err="failed to get container status \"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661\": rpc error: code = NotFound desc = could not find container \"6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661\": container with ID starting with 6dac85e489ca5901df3ed56b84db84a1eeea9bc2154ff5a398ce00c5f0edf661 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.246781 4923 scope.go:117] "RemoveContainer" containerID="678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247027 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6"} err="failed to get container status \"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6\": rpc error: code = NotFound desc = could not find container \"678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6\": container with ID starting with 678d94e877125f7c10a246da944ce794e58c872e87b8a5ca503a7b6bc22d9db6 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247054 4923 scope.go:117] "RemoveContainer" containerID="5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247278 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e"} err="failed to get container status \"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e\": rpc error: code = NotFound desc = could not find container \"5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e\": container with ID starting with 5f2b5b675bef5dc6b9f41659e2d5874dfcf3f532b758c9814c29856752570d6e not found: ID does not 
exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247299 4923 scope.go:117] "RemoveContainer" containerID="87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247508 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba"} err="failed to get container status \"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba\": rpc error: code = NotFound desc = could not find container \"87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba\": container with ID starting with 87832ecbdcba5c1b91eaf711730aa0ff33ffc94ed2611ed9abcde3b0fedd3aba not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247529 4923 scope.go:117] "RemoveContainer" containerID="c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247720 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3"} err="failed to get container status \"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3\": rpc error: code = NotFound desc = could not find container \"c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3\": container with ID starting with c1ea8207610413fd41a1bf8001be27bde97c42c2b9cd8be8e11f0407a10630f3 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247740 4923 scope.go:117] "RemoveContainer" containerID="d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247947 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4"} err="failed to get container status 
\"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4\": rpc error: code = NotFound desc = could not find container \"d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4\": container with ID starting with d920fcdc6374ceb79d6f8ea705e06a52886603b2645de0f3a50220d08af1ccb4 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.247991 4923 scope.go:117] "RemoveContainer" containerID="6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.248210 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55"} err="failed to get container status \"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55\": rpc error: code = NotFound desc = could not find container \"6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55\": container with ID starting with 6fcdcc1d3637b595c58a13d00649f4fdbe3df085b7de5ff51cca91699ea51f55 not found: ID does not exist" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.248231 4923 scope.go:117] "RemoveContainer" containerID="6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3" Mar 21 04:29:53 crc kubenswrapper[4923]: I0321 04:29:53.248451 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3"} err="failed to get container status \"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3\": rpc error: code = NotFound desc = could not find container \"6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3\": container with ID starting with 6ae4fdd694156a19a6d068ca04fe35f38352c3323a4b8bcfceac06e5be8879b3 not found: ID does not exist" Mar 21 04:29:54 crc kubenswrapper[4923]: I0321 04:29:54.008585 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"bc3c80a4dd97562a0d9ca65bbe31e5c8d6b4e346b9d1d4a30291695fd19ed8ed"} Mar 21 04:29:54 crc kubenswrapper[4923]: I0321 04:29:54.008941 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"14d6113795ee379457d8c1cb23f4a9b1548ce431cc6c278062f4b1367904d12d"} Mar 21 04:29:54 crc kubenswrapper[4923]: I0321 04:29:54.008965 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"33ee6ab531440fcb9d0cecabcf389c578564c98432cafe659fc1a41c3a7c45a2"} Mar 21 04:29:54 crc kubenswrapper[4923]: I0321 04:29:54.008984 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"52c3025b1be85acdb29c485eef9fe092e4bb40163def3f844f8759436b36d14d"} Mar 21 04:29:54 crc kubenswrapper[4923]: I0321 04:29:54.008999 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"676420bb4c16c2452103d098498248f9b344b13b0e40fc9a3f8f631aad53f973"} Mar 21 04:29:54 crc kubenswrapper[4923]: I0321 04:29:54.009017 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"4be844a549079e72b7f5bf267239238535510b5513083c3fd14846ffd8853b37"} Mar 21 04:29:54 crc kubenswrapper[4923]: I0321 04:29:54.370398 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d61f2a-3f56-4d98-a1df-384973815163" 
path="/var/lib/kubelet/pods/95d61f2a-3f56-4d98-a1df-384973815163/volumes" Mar 21 04:29:56 crc kubenswrapper[4923]: I0321 04:29:56.035631 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"2a5ee6a9891b16ac0a2edcf6698be2411ef01b2c230cc8f00bbf07255082277e"} Mar 21 04:29:59 crc kubenswrapper[4923]: I0321 04:29:59.065864 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" event={"ID":"ec3c3ace-e26b-4a78-9f67-c02cea959a03","Type":"ContainerStarted","Data":"073dd860d08a33a5ddcbc5351e6e8cfdde868b3c6d2444517ba123a8d6108668"} Mar 21 04:29:59 crc kubenswrapper[4923]: I0321 04:29:59.066443 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:59 crc kubenswrapper[4923]: I0321 04:29:59.066472 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:59 crc kubenswrapper[4923]: I0321 04:29:59.066486 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:59 crc kubenswrapper[4923]: I0321 04:29:59.097655 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" podStartSLOduration=7.097636179 podStartE2EDuration="7.097636179s" podCreationTimestamp="2026-03-21 04:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:29:59.094495768 +0000 UTC m=+764.247506895" watchObservedRunningTime="2026-03-21 04:29:59.097636179 +0000 UTC m=+764.250647276" Mar 21 04:29:59 crc kubenswrapper[4923]: I0321 04:29:59.106194 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:29:59 crc kubenswrapper[4923]: I0321 04:29:59.115660 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.140816 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567790-qlwn4"] Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.142357 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.144008 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt"] Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.144610 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.144846 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.145123 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.149098 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.149138 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.153444 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567790-qlwn4"] Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.154726 
4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.166668 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt"] Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.278395 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/405a0885-e7d1-4379-8a17-5e880c97d04a-config-volume\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.278449 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zg6\" (UniqueName: \"kubernetes.io/projected/1bf8a628-e7fd-4829-b620-f1bbea8efd52-kube-api-access-76zg6\") pod \"auto-csr-approver-29567790-qlwn4\" (UID: \"1bf8a628-e7fd-4829-b620-f1bbea8efd52\") " pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.278474 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdtc\" (UniqueName: \"kubernetes.io/projected/405a0885-e7d1-4379-8a17-5e880c97d04a-kube-api-access-jvdtc\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.278675 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/405a0885-e7d1-4379-8a17-5e880c97d04a-secret-volume\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.379312 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/405a0885-e7d1-4379-8a17-5e880c97d04a-secret-volume\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.379390 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/405a0885-e7d1-4379-8a17-5e880c97d04a-config-volume\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.379412 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zg6\" (UniqueName: \"kubernetes.io/projected/1bf8a628-e7fd-4829-b620-f1bbea8efd52-kube-api-access-76zg6\") pod \"auto-csr-approver-29567790-qlwn4\" (UID: \"1bf8a628-e7fd-4829-b620-f1bbea8efd52\") " pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.379439 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdtc\" (UniqueName: \"kubernetes.io/projected/405a0885-e7d1-4379-8a17-5e880c97d04a-kube-api-access-jvdtc\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.380535 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/405a0885-e7d1-4379-8a17-5e880c97d04a-config-volume\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.384473 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/405a0885-e7d1-4379-8a17-5e880c97d04a-secret-volume\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.395762 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zg6\" (UniqueName: \"kubernetes.io/projected/1bf8a628-e7fd-4829-b620-f1bbea8efd52-kube-api-access-76zg6\") pod \"auto-csr-approver-29567790-qlwn4\" (UID: \"1bf8a628-e7fd-4829-b620-f1bbea8efd52\") " pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.402964 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdtc\" (UniqueName: \"kubernetes.io/projected/405a0885-e7d1-4379-8a17-5e880c97d04a-kube-api-access-jvdtc\") pod \"collect-profiles-29567790-nw5wt\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.473351 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:00 crc kubenswrapper[4923]: I0321 04:30:00.485662 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.503070 4923 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(185a40c2f9bbe1d64977ded14e338d622e54d6681aeded2ca857f3e50f8f095e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.503158 4923 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(185a40c2f9bbe1d64977ded14e338d622e54d6681aeded2ca857f3e50f8f095e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.503180 4923 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(185a40c2f9bbe1d64977ded14e338d622e54d6681aeded2ca857f3e50f8f095e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.503233 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29567790-qlwn4_openshift-infra(1bf8a628-e7fd-4829-b620-f1bbea8efd52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29567790-qlwn4_openshift-infra(1bf8a628-e7fd-4829-b620-f1bbea8efd52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(185a40c2f9bbe1d64977ded14e338d622e54d6681aeded2ca857f3e50f8f095e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" podUID="1bf8a628-e7fd-4829-b620-f1bbea8efd52" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.516479 4923 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(9a24b501df687850b7ca7fe341f31ac1e7dd8b320941e3cd195886a9b85c4175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.516544 4923 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(9a24b501df687850b7ca7fe341f31ac1e7dd8b320941e3cd195886a9b85c4175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.516572 4923 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(9a24b501df687850b7ca7fe341f31ac1e7dd8b320941e3cd195886a9b85c4175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:00 crc kubenswrapper[4923]: E0321 04:30:00.516689 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager(405a0885-e7d1-4379-8a17-5e880c97d04a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager(405a0885-e7d1-4379-8a17-5e880c97d04a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(9a24b501df687850b7ca7fe341f31ac1e7dd8b320941e3cd195886a9b85c4175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" podUID="405a0885-e7d1-4379-8a17-5e880c97d04a" Mar 21 04:30:01 crc kubenswrapper[4923]: I0321 04:30:01.082713 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:01 crc kubenswrapper[4923]: I0321 04:30:01.082728 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:01 crc kubenswrapper[4923]: I0321 04:30:01.083238 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:01 crc kubenswrapper[4923]: I0321 04:30:01.083303 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.119686 4923 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(d766bc948bc177acf8db1cd617ce7f8e6310c408265a7c949d73aa02d3c8a5c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.119766 4923 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(d766bc948bc177acf8db1cd617ce7f8e6310c408265a7c949d73aa02d3c8a5c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.119791 4923 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(d766bc948bc177acf8db1cd617ce7f8e6310c408265a7c949d73aa02d3c8a5c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.119848 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29567790-qlwn4_openshift-infra(1bf8a628-e7fd-4829-b620-f1bbea8efd52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29567790-qlwn4_openshift-infra(1bf8a628-e7fd-4829-b620-f1bbea8efd52)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29567790-qlwn4_openshift-infra_1bf8a628-e7fd-4829-b620-f1bbea8efd52_0(d766bc948bc177acf8db1cd617ce7f8e6310c408265a7c949d73aa02d3c8a5c1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" podUID="1bf8a628-e7fd-4829-b620-f1bbea8efd52" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.135004 4923 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(a3b3561e8f9e06dbd3dc42fe61f6d665d447745e8a32ca808125e1df1c993a69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.135376 4923 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(a3b3561e8f9e06dbd3dc42fe61f6d665d447745e8a32ca808125e1df1c993a69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.135404 4923 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(a3b3561e8f9e06dbd3dc42fe61f6d665d447745e8a32ca808125e1df1c993a69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:01 crc kubenswrapper[4923]: E0321 04:30:01.135467 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager(405a0885-e7d1-4379-8a17-5e880c97d04a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager(405a0885-e7d1-4379-8a17-5e880c97d04a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29567790-nw5wt_openshift-operator-lifecycle-manager_405a0885-e7d1-4379-8a17-5e880c97d04a_0(a3b3561e8f9e06dbd3dc42fe61f6d665d447745e8a32ca808125e1df1c993a69): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" podUID="405a0885-e7d1-4379-8a17-5e880c97d04a" Mar 21 04:30:03 crc kubenswrapper[4923]: I0321 04:30:03.236043 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:30:03 crc kubenswrapper[4923]: I0321 04:30:03.236222 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:30:05 crc kubenswrapper[4923]: I0321 04:30:05.975085 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9"] Mar 21 04:30:05 crc kubenswrapper[4923]: I0321 04:30:05.978415 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:05 crc kubenswrapper[4923]: I0321 04:30:05.982998 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.029428 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9"] Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.070586 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxgcz\" (UniqueName: \"kubernetes.io/projected/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-kube-api-access-qxgcz\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.070641 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.070678 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: 
I0321 04:30:06.171900 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxgcz\" (UniqueName: \"kubernetes.io/projected/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-kube-api-access-qxgcz\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.172265 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.172317 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.172924 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.173070 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.192832 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxgcz\" (UniqueName: \"kubernetes.io/projected/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-kube-api-access-qxgcz\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.354029 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:06 crc kubenswrapper[4923]: I0321 04:30:06.631770 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9"] Mar 21 04:30:06 crc kubenswrapper[4923]: W0321 04:30:06.638381 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5079aa2d_ce9f_4e98_bc7b_48fcb327a98f.slice/crio-31443f61c67147b79e4149910e3450adf7217eda965ab4e99a3d0d8ed4f23e28 WatchSource:0}: Error finding container 31443f61c67147b79e4149910e3450adf7217eda965ab4e99a3d0d8ed4f23e28: Status 404 returned error can't find the container with id 31443f61c67147b79e4149910e3450adf7217eda965ab4e99a3d0d8ed4f23e28 Mar 21 04:30:07 crc kubenswrapper[4923]: I0321 04:30:07.120993 4923 generic.go:334] "Generic (PLEG): container finished" podID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerID="b5d4fa3a69a7df27c0620a4ad6f55ef5381ae462b0fbc6b43fcd50c3b8094505" exitCode=0 
Mar 21 04:30:07 crc kubenswrapper[4923]: I0321 04:30:07.121042 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" event={"ID":"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f","Type":"ContainerDied","Data":"b5d4fa3a69a7df27c0620a4ad6f55ef5381ae462b0fbc6b43fcd50c3b8094505"} Mar 21 04:30:07 crc kubenswrapper[4923]: I0321 04:30:07.121128 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" event={"ID":"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f","Type":"ContainerStarted","Data":"31443f61c67147b79e4149910e3450adf7217eda965ab4e99a3d0d8ed4f23e28"} Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.311487 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zpvzx"] Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.315910 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.324464 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpvzx"] Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.405814 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-catalog-content\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.405882 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-utilities\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.405938 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb7sl\" (UniqueName: \"kubernetes.io/projected/ce5a383c-7a3e-448e-b930-05e9aa04558e-kube-api-access-xb7sl\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.507576 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-catalog-content\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.507769 4923 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-utilities\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.508376 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-catalog-content\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.508690 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-utilities\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.508979 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb7sl\" (UniqueName: \"kubernetes.io/projected/ce5a383c-7a3e-448e-b930-05e9aa04558e-kube-api-access-xb7sl\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.536525 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb7sl\" (UniqueName: \"kubernetes.io/projected/ce5a383c-7a3e-448e-b930-05e9aa04558e-kube-api-access-xb7sl\") pod \"redhat-operators-zpvzx\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.660495 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:08 crc kubenswrapper[4923]: I0321 04:30:08.883460 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpvzx"] Mar 21 04:30:09 crc kubenswrapper[4923]: I0321 04:30:09.141735 4923 generic.go:334] "Generic (PLEG): container finished" podID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerID="6a9ec8c398d94b936d3d8b4631990db1302b251377fed40c564f0b733f2e7be5" exitCode=0 Mar 21 04:30:09 crc kubenswrapper[4923]: I0321 04:30:09.141847 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" event={"ID":"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f","Type":"ContainerDied","Data":"6a9ec8c398d94b936d3d8b4631990db1302b251377fed40c564f0b733f2e7be5"} Mar 21 04:30:09 crc kubenswrapper[4923]: I0321 04:30:09.143366 4923 generic.go:334] "Generic (PLEG): container finished" podID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerID="fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c" exitCode=0 Mar 21 04:30:09 crc kubenswrapper[4923]: I0321 04:30:09.143418 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpvzx" event={"ID":"ce5a383c-7a3e-448e-b930-05e9aa04558e","Type":"ContainerDied","Data":"fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c"} Mar 21 04:30:09 crc kubenswrapper[4923]: I0321 04:30:09.143448 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpvzx" event={"ID":"ce5a383c-7a3e-448e-b930-05e9aa04558e","Type":"ContainerStarted","Data":"60962e67d2c3a64e61f8451b3cafbf29996aeb936aadccae4464793934323183"} Mar 21 04:30:10 crc kubenswrapper[4923]: I0321 04:30:10.153064 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpvzx" 
event={"ID":"ce5a383c-7a3e-448e-b930-05e9aa04558e","Type":"ContainerStarted","Data":"cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0"} Mar 21 04:30:10 crc kubenswrapper[4923]: I0321 04:30:10.156627 4923 generic.go:334] "Generic (PLEG): container finished" podID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerID="20741c441f3233b57ce19d3188589c827996345836b3e7fda5a96c45d80dd52e" exitCode=0 Mar 21 04:30:10 crc kubenswrapper[4923]: I0321 04:30:10.156702 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" event={"ID":"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f","Type":"ContainerDied","Data":"20741c441f3233b57ce19d3188589c827996345836b3e7fda5a96c45d80dd52e"} Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.168795 4923 generic.go:334] "Generic (PLEG): container finished" podID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerID="cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0" exitCode=0 Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.168881 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpvzx" event={"ID":"ce5a383c-7a3e-448e-b930-05e9aa04558e","Type":"ContainerDied","Data":"cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0"} Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.524860 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.673646 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-util\") pod \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.673802 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxgcz\" (UniqueName: \"kubernetes.io/projected/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-kube-api-access-qxgcz\") pod \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.673930 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-bundle\") pod \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\" (UID: \"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f\") " Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.675100 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-bundle" (OuterVolumeSpecName: "bundle") pod "5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" (UID: "5079aa2d-ce9f-4e98-bc7b-48fcb327a98f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.683743 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-kube-api-access-qxgcz" (OuterVolumeSpecName: "kube-api-access-qxgcz") pod "5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" (UID: "5079aa2d-ce9f-4e98-bc7b-48fcb327a98f"). InnerVolumeSpecName "kube-api-access-qxgcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.706953 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-util" (OuterVolumeSpecName: "util") pod "5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" (UID: "5079aa2d-ce9f-4e98-bc7b-48fcb327a98f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.776143 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxgcz\" (UniqueName: \"kubernetes.io/projected/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-kube-api-access-qxgcz\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.776181 4923 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:11 crc kubenswrapper[4923]: I0321 04:30:11.776194 4923 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5079aa2d-ce9f-4e98-bc7b-48fcb327a98f-util\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:12 crc kubenswrapper[4923]: I0321 04:30:12.180081 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" event={"ID":"5079aa2d-ce9f-4e98-bc7b-48fcb327a98f","Type":"ContainerDied","Data":"31443f61c67147b79e4149910e3450adf7217eda965ab4e99a3d0d8ed4f23e28"} Mar 21 04:30:12 crc kubenswrapper[4923]: I0321 04:30:12.180464 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31443f61c67147b79e4149910e3450adf7217eda965ab4e99a3d0d8ed4f23e28" Mar 21 04:30:12 crc kubenswrapper[4923]: I0321 04:30:12.180607 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9" Mar 21 04:30:13 crc kubenswrapper[4923]: I0321 04:30:13.189098 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpvzx" event={"ID":"ce5a383c-7a3e-448e-b930-05e9aa04558e","Type":"ContainerStarted","Data":"7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5"} Mar 21 04:30:13 crc kubenswrapper[4923]: I0321 04:30:13.218729 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zpvzx" podStartSLOduration=2.14617462 podStartE2EDuration="5.218698856s" podCreationTimestamp="2026-03-21 04:30:08 +0000 UTC" firstStartedPulling="2026-03-21 04:30:09.14503906 +0000 UTC m=+774.298050167" lastFinishedPulling="2026-03-21 04:30:12.217563286 +0000 UTC m=+777.370574403" observedRunningTime="2026-03-21 04:30:13.216218654 +0000 UTC m=+778.369229781" watchObservedRunningTime="2026-03-21 04:30:13.218698856 +0000 UTC m=+778.371709983" Mar 21 04:30:15 crc kubenswrapper[4923]: I0321 04:30:15.357771 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:15 crc kubenswrapper[4923]: I0321 04:30:15.358533 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:15 crc kubenswrapper[4923]: I0321 04:30:15.579010 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt"] Mar 21 04:30:16 crc kubenswrapper[4923]: I0321 04:30:16.207258 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" event={"ID":"405a0885-e7d1-4379-8a17-5e880c97d04a","Type":"ContainerStarted","Data":"32d4241cab18218cedf3462628b45776b51501184f9069f5e33af7097d9b3691"} Mar 21 04:30:16 crc kubenswrapper[4923]: I0321 04:30:16.207313 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" event={"ID":"405a0885-e7d1-4379-8a17-5e880c97d04a","Type":"ContainerStarted","Data":"5150f4712869983f74c156d25c7d1f62756fcaa0c846dbea3df946538924ef67"} Mar 21 04:30:16 crc kubenswrapper[4923]: I0321 04:30:16.224440 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" podStartSLOduration=16.224412302 podStartE2EDuration="16.224412302s" podCreationTimestamp="2026-03-21 04:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:30:16.220771716 +0000 UTC m=+781.373782823" watchObservedRunningTime="2026-03-21 04:30:16.224412302 +0000 UTC m=+781.377423409" Mar 21 04:30:16 crc kubenswrapper[4923]: I0321 04:30:16.360295 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:16 crc kubenswrapper[4923]: I0321 04:30:16.360700 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:16 crc kubenswrapper[4923]: I0321 04:30:16.562120 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567790-qlwn4"] Mar 21 04:30:17 crc kubenswrapper[4923]: I0321 04:30:17.212790 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" event={"ID":"1bf8a628-e7fd-4829-b620-f1bbea8efd52","Type":"ContainerStarted","Data":"9f1a52b626eb53813c1e135f12f54440712d9a95fc091a0a8de38158cb0996f0"} Mar 21 04:30:18 crc kubenswrapper[4923]: I0321 04:30:18.218677 4923 generic.go:334] "Generic (PLEG): container finished" podID="405a0885-e7d1-4379-8a17-5e880c97d04a" containerID="32d4241cab18218cedf3462628b45776b51501184f9069f5e33af7097d9b3691" exitCode=0 Mar 21 04:30:18 crc kubenswrapper[4923]: I0321 04:30:18.218790 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" event={"ID":"405a0885-e7d1-4379-8a17-5e880c97d04a","Type":"ContainerDied","Data":"32d4241cab18218cedf3462628b45776b51501184f9069f5e33af7097d9b3691"} Mar 21 04:30:18 crc kubenswrapper[4923]: I0321 04:30:18.220382 4923 generic.go:334] "Generic (PLEG): container finished" podID="1bf8a628-e7fd-4829-b620-f1bbea8efd52" containerID="b88cdc3dc7e305c8295384aa37a8efda2778164f6d665c228d041f5ebcc00091" exitCode=0 Mar 21 04:30:18 crc kubenswrapper[4923]: I0321 04:30:18.220412 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" event={"ID":"1bf8a628-e7fd-4829-b620-f1bbea8efd52","Type":"ContainerDied","Data":"b88cdc3dc7e305c8295384aa37a8efda2778164f6d665c228d041f5ebcc00091"} Mar 21 04:30:18 crc kubenswrapper[4923]: I0321 04:30:18.661249 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:18 crc kubenswrapper[4923]: I0321 
04:30:18.661877 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.497957 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.505464 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.579058 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/405a0885-e7d1-4379-8a17-5e880c97d04a-config-volume\") pod \"405a0885-e7d1-4379-8a17-5e880c97d04a\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.579142 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/405a0885-e7d1-4379-8a17-5e880c97d04a-secret-volume\") pod \"405a0885-e7d1-4379-8a17-5e880c97d04a\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.579204 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdtc\" (UniqueName: \"kubernetes.io/projected/405a0885-e7d1-4379-8a17-5e880c97d04a-kube-api-access-jvdtc\") pod \"405a0885-e7d1-4379-8a17-5e880c97d04a\" (UID: \"405a0885-e7d1-4379-8a17-5e880c97d04a\") " Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.579240 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76zg6\" (UniqueName: \"kubernetes.io/projected/1bf8a628-e7fd-4829-b620-f1bbea8efd52-kube-api-access-76zg6\") pod \"1bf8a628-e7fd-4829-b620-f1bbea8efd52\" (UID: \"1bf8a628-e7fd-4829-b620-f1bbea8efd52\") " 
Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.579859 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/405a0885-e7d1-4379-8a17-5e880c97d04a-config-volume" (OuterVolumeSpecName: "config-volume") pod "405a0885-e7d1-4379-8a17-5e880c97d04a" (UID: "405a0885-e7d1-4379-8a17-5e880c97d04a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.584056 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405a0885-e7d1-4379-8a17-5e880c97d04a-kube-api-access-jvdtc" (OuterVolumeSpecName: "kube-api-access-jvdtc") pod "405a0885-e7d1-4379-8a17-5e880c97d04a" (UID: "405a0885-e7d1-4379-8a17-5e880c97d04a"). InnerVolumeSpecName "kube-api-access-jvdtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.584112 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405a0885-e7d1-4379-8a17-5e880c97d04a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "405a0885-e7d1-4379-8a17-5e880c97d04a" (UID: "405a0885-e7d1-4379-8a17-5e880c97d04a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.584773 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf8a628-e7fd-4829-b620-f1bbea8efd52-kube-api-access-76zg6" (OuterVolumeSpecName: "kube-api-access-76zg6") pod "1bf8a628-e7fd-4829-b620-f1bbea8efd52" (UID: "1bf8a628-e7fd-4829-b620-f1bbea8efd52"). InnerVolumeSpecName "kube-api-access-76zg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.680128 4923 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/405a0885-e7d1-4379-8a17-5e880c97d04a-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.680463 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdtc\" (UniqueName: \"kubernetes.io/projected/405a0885-e7d1-4379-8a17-5e880c97d04a-kube-api-access-jvdtc\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.680477 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76zg6\" (UniqueName: \"kubernetes.io/projected/1bf8a628-e7fd-4829-b620-f1bbea8efd52-kube-api-access-76zg6\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.680490 4923 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/405a0885-e7d1-4379-8a17-5e880c97d04a-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:19 crc kubenswrapper[4923]: I0321 04:30:19.728049 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zpvzx" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="registry-server" probeResult="failure" output=< Mar 21 04:30:19 crc kubenswrapper[4923]: timeout: failed to connect service ":50051" within 1s Mar 21 04:30:19 crc kubenswrapper[4923]: > Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.233643 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" event={"ID":"405a0885-e7d1-4379-8a17-5e880c97d04a","Type":"ContainerDied","Data":"5150f4712869983f74c156d25c7d1f62756fcaa0c846dbea3df946538924ef67"} Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.233680 4923 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5150f4712869983f74c156d25c7d1f62756fcaa0c846dbea3df946538924ef67" Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.233723 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567790-nw5wt" Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.235254 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" event={"ID":"1bf8a628-e7fd-4829-b620-f1bbea8efd52","Type":"ContainerDied","Data":"9f1a52b626eb53813c1e135f12f54440712d9a95fc091a0a8de38158cb0996f0"} Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.235272 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1a52b626eb53813c1e135f12f54440712d9a95fc091a0a8de38158cb0996f0" Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.235361 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567790-qlwn4" Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.600653 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567784-ls96m"] Mar 21 04:30:20 crc kubenswrapper[4923]: I0321 04:30:20.607387 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567784-ls96m"] Mar 21 04:30:22 crc kubenswrapper[4923]: I0321 04:30:22.368659 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00" path="/var/lib/kubelet/pods/bc6afb5b-1e70-40d0-9ff9-8e6dafc40c00/volumes" Mar 21 04:30:22 crc kubenswrapper[4923]: I0321 04:30:22.447616 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gx9n6" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065007 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57"] Mar 21 04:30:23 crc kubenswrapper[4923]: E0321 04:30:23.065493 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerName="util" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065505 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerName="util" Mar 21 04:30:23 crc kubenswrapper[4923]: E0321 04:30:23.065527 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf8a628-e7fd-4829-b620-f1bbea8efd52" containerName="oc" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065534 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf8a628-e7fd-4829-b620-f1bbea8efd52" containerName="oc" Mar 21 04:30:23 crc kubenswrapper[4923]: E0321 04:30:23.065545 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerName="pull" Mar 21 
04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065553 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerName="pull" Mar 21 04:30:23 crc kubenswrapper[4923]: E0321 04:30:23.065563 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerName="extract" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065570 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerName="extract" Mar 21 04:30:23 crc kubenswrapper[4923]: E0321 04:30:23.065579 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405a0885-e7d1-4379-8a17-5e880c97d04a" containerName="collect-profiles" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065584 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="405a0885-e7d1-4379-8a17-5e880c97d04a" containerName="collect-profiles" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065670 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf8a628-e7fd-4829-b620-f1bbea8efd52" containerName="oc" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065678 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="405a0885-e7d1-4379-8a17-5e880c97d04a" containerName="collect-profiles" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.065690 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="5079aa2d-ce9f-4e98-bc7b-48fcb327a98f" containerName="extract" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.066121 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.068555 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.068613 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pcftm" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.068555 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.068666 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.068749 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.120867 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99dd2fb0-56d7-40c7-836f-2f004f9dc676-webhook-cert\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.120958 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99dd2fb0-56d7-40c7-836f-2f004f9dc676-apiservice-cert\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.121000 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8nq5\" (UniqueName: \"kubernetes.io/projected/99dd2fb0-56d7-40c7-836f-2f004f9dc676-kube-api-access-v8nq5\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.128682 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57"] Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.222618 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99dd2fb0-56d7-40c7-836f-2f004f9dc676-webhook-cert\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.222695 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99dd2fb0-56d7-40c7-836f-2f004f9dc676-apiservice-cert\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.222723 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8nq5\" (UniqueName: \"kubernetes.io/projected/99dd2fb0-56d7-40c7-836f-2f004f9dc676-kube-api-access-v8nq5\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.237348 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8nq5\" (UniqueName: \"kubernetes.io/projected/99dd2fb0-56d7-40c7-836f-2f004f9dc676-kube-api-access-v8nq5\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.238850 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99dd2fb0-56d7-40c7-836f-2f004f9dc676-apiservice-cert\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.239801 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99dd2fb0-56d7-40c7-836f-2f004f9dc676-webhook-cert\") pod \"metallb-operator-controller-manager-fd8f45f-g4r57\" (UID: \"99dd2fb0-56d7-40c7-836f-2f004f9dc676\") " pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.313553 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4"] Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.314121 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.315761 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.316096 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4j6nc" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.316481 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.330724 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4"] Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.381240 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.424757 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgzsn\" (UniqueName: \"kubernetes.io/projected/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-kube-api-access-cgzsn\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.424846 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-apiservice-cert\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 
04:30:23.424864 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-webhook-cert\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.528258 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-apiservice-cert\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.528309 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-webhook-cert\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.528374 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgzsn\" (UniqueName: \"kubernetes.io/projected/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-kube-api-access-cgzsn\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.535851 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.538706 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-webhook-cert\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.548223 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgzsn\" (UniqueName: \"kubernetes.io/projected/ac4f10a7-8f47-40e1-9ca2-6f401c588c64-kube-api-access-cgzsn\") pod \"metallb-operator-webhook-server-5b974f9ffb-m2lc4\" (UID: \"ac4f10a7-8f47-40e1-9ca2-6f401c588c64\") " pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.623679 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57"] Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.627681 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:23 crc kubenswrapper[4923]: I0321 04:30:23.842138 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4"] Mar 21 04:30:23 crc kubenswrapper[4923]: W0321 04:30:23.850562 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac4f10a7_8f47_40e1_9ca2_6f401c588c64.slice/crio-a9fc86469df865845f8df25ec719354a5b8b7fa4441e830256639530eab40d22 WatchSource:0}: Error finding container a9fc86469df865845f8df25ec719354a5b8b7fa4441e830256639530eab40d22: Status 404 returned error can't find the container with id a9fc86469df865845f8df25ec719354a5b8b7fa4441e830256639530eab40d22 Mar 21 04:30:24 crc kubenswrapper[4923]: I0321 04:30:24.260025 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" event={"ID":"ac4f10a7-8f47-40e1-9ca2-6f401c588c64","Type":"ContainerStarted","Data":"a9fc86469df865845f8df25ec719354a5b8b7fa4441e830256639530eab40d22"} Mar 21 04:30:24 crc kubenswrapper[4923]: I0321 04:30:24.261081 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" event={"ID":"99dd2fb0-56d7-40c7-836f-2f004f9dc676","Type":"ContainerStarted","Data":"5e9f822bae35fa498f04e3a970631e590d1a08c8f1c6149e5796b847a5a5a915"} Mar 21 04:30:28 crc kubenswrapper[4923]: I0321 04:30:28.725175 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:28 crc kubenswrapper[4923]: I0321 04:30:28.780475 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:29 crc kubenswrapper[4923]: I0321 04:30:29.298070 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" event={"ID":"ac4f10a7-8f47-40e1-9ca2-6f401c588c64","Type":"ContainerStarted","Data":"124291a629beb705a1afa7c0d8ef6b0ce43dba6cf442ec85936bebefb5ee2d82"} Mar 21 04:30:29 crc kubenswrapper[4923]: I0321 04:30:29.299928 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:30:29 crc kubenswrapper[4923]: I0321 04:30:29.302654 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" event={"ID":"99dd2fb0-56d7-40c7-836f-2f004f9dc676","Type":"ContainerStarted","Data":"c1727d3d9aaa272817a1ef5ab46791c0b3b5c8b773d561517c328db1e2b9c0b5"} Mar 21 04:30:29 crc kubenswrapper[4923]: I0321 04:30:29.302936 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:30:29 crc kubenswrapper[4923]: I0321 04:30:29.332565 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" podStartSLOduration=2.017582687 podStartE2EDuration="6.332528103s" podCreationTimestamp="2026-03-21 04:30:23 +0000 UTC" firstStartedPulling="2026-03-21 04:30:23.853051751 +0000 UTC m=+789.006062848" lastFinishedPulling="2026-03-21 04:30:28.167997177 +0000 UTC m=+793.321008264" observedRunningTime="2026-03-21 04:30:29.325952302 +0000 UTC m=+794.478963419" watchObservedRunningTime="2026-03-21 04:30:29.332528103 +0000 UTC m=+794.485539260" Mar 21 04:30:29 crc kubenswrapper[4923]: I0321 04:30:29.419506 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" podStartSLOduration=1.9185196549999999 podStartE2EDuration="6.419484963s" podCreationTimestamp="2026-03-21 04:30:23 +0000 UTC" firstStartedPulling="2026-03-21 04:30:23.651470348 
+0000 UTC m=+788.804481435" lastFinishedPulling="2026-03-21 04:30:28.152435656 +0000 UTC m=+793.305446743" observedRunningTime="2026-03-21 04:30:29.413815299 +0000 UTC m=+794.566826396" watchObservedRunningTime="2026-03-21 04:30:29.419484963 +0000 UTC m=+794.572496050" Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.286627 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpvzx"] Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.308598 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zpvzx" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="registry-server" containerID="cri-o://7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5" gracePeriod=2 Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.659339 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.726774 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-catalog-content\") pod \"ce5a383c-7a3e-448e-b930-05e9aa04558e\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.726844 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb7sl\" (UniqueName: \"kubernetes.io/projected/ce5a383c-7a3e-448e-b930-05e9aa04558e-kube-api-access-xb7sl\") pod \"ce5a383c-7a3e-448e-b930-05e9aa04558e\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.726965 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-utilities\") pod 
\"ce5a383c-7a3e-448e-b930-05e9aa04558e\" (UID: \"ce5a383c-7a3e-448e-b930-05e9aa04558e\") " Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.727896 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-utilities" (OuterVolumeSpecName: "utilities") pod "ce5a383c-7a3e-448e-b930-05e9aa04558e" (UID: "ce5a383c-7a3e-448e-b930-05e9aa04558e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.733353 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5a383c-7a3e-448e-b930-05e9aa04558e-kube-api-access-xb7sl" (OuterVolumeSpecName: "kube-api-access-xb7sl") pod "ce5a383c-7a3e-448e-b930-05e9aa04558e" (UID: "ce5a383c-7a3e-448e-b930-05e9aa04558e"). InnerVolumeSpecName "kube-api-access-xb7sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.828395 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb7sl\" (UniqueName: \"kubernetes.io/projected/ce5a383c-7a3e-448e-b930-05e9aa04558e-kube-api-access-xb7sl\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.828433 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.850123 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce5a383c-7a3e-448e-b930-05e9aa04558e" (UID: "ce5a383c-7a3e-448e-b930-05e9aa04558e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:30:30 crc kubenswrapper[4923]: I0321 04:30:30.929345 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce5a383c-7a3e-448e-b930-05e9aa04558e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.318741 4923 generic.go:334] "Generic (PLEG): container finished" podID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerID="7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5" exitCode=0 Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.318786 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpvzx" event={"ID":"ce5a383c-7a3e-448e-b930-05e9aa04558e","Type":"ContainerDied","Data":"7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5"} Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.318821 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpvzx" event={"ID":"ce5a383c-7a3e-448e-b930-05e9aa04558e","Type":"ContainerDied","Data":"60962e67d2c3a64e61f8451b3cafbf29996aeb936aadccae4464793934323183"} Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.318838 4923 scope.go:117] "RemoveContainer" containerID="7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.318863 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zpvzx" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.340134 4923 scope.go:117] "RemoveContainer" containerID="cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.351973 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zpvzx"] Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.360942 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zpvzx"] Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.367725 4923 scope.go:117] "RemoveContainer" containerID="fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.414188 4923 scope.go:117] "RemoveContainer" containerID="7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5" Mar 21 04:30:31 crc kubenswrapper[4923]: E0321 04:30:31.415100 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5\": container with ID starting with 7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5 not found: ID does not exist" containerID="7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.415144 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5"} err="failed to get container status \"7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5\": rpc error: code = NotFound desc = could not find container \"7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5\": container with ID starting with 7f3a3e8854759e33e3df2c5cd588ec90822be6e03ae943ce13128bd0e5c027d5 not found: ID does 
not exist" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.415170 4923 scope.go:117] "RemoveContainer" containerID="cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0" Mar 21 04:30:31 crc kubenswrapper[4923]: E0321 04:30:31.415771 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0\": container with ID starting with cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0 not found: ID does not exist" containerID="cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.415833 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0"} err="failed to get container status \"cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0\": rpc error: code = NotFound desc = could not find container \"cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0\": container with ID starting with cb9b701dc95cab96acf256ca0a5c221aa8797427d0c2d021de1ab860e1334ae0 not found: ID does not exist" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.415882 4923 scope.go:117] "RemoveContainer" containerID="fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c" Mar 21 04:30:31 crc kubenswrapper[4923]: E0321 04:30:31.416393 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c\": container with ID starting with fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c not found: ID does not exist" containerID="fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c" Mar 21 04:30:31 crc kubenswrapper[4923]: I0321 04:30:31.416417 4923 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c"} err="failed to get container status \"fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c\": rpc error: code = NotFound desc = could not find container \"fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c\": container with ID starting with fa2fe1bc0b05651e8c7a85f7ff3d97ec270fc9cff351d94ad82989e238b94b2c not found: ID does not exist" Mar 21 04:30:32 crc kubenswrapper[4923]: I0321 04:30:32.371864 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" path="/var/lib/kubelet/pods/ce5a383c-7a3e-448e-b930-05e9aa04558e/volumes" Mar 21 04:30:33 crc kubenswrapper[4923]: I0321 04:30:33.236324 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:30:33 crc kubenswrapper[4923]: I0321 04:30:33.236409 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:30:43 crc kubenswrapper[4923]: I0321 04:30:43.633642 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5b974f9ffb-m2lc4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.587275 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nh9q4"] Mar 21 04:31:01 crc kubenswrapper[4923]: E0321 04:31:01.588240 4923 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="extract-content" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.588263 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="extract-content" Mar 21 04:31:01 crc kubenswrapper[4923]: E0321 04:31:01.588281 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="extract-utilities" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.588294 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="extract-utilities" Mar 21 04:31:01 crc kubenswrapper[4923]: E0321 04:31:01.588310 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="registry-server" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.588349 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="registry-server" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.588547 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5a383c-7a3e-448e-b930-05e9aa04558e" containerName="registry-server" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.589781 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.609551 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh9q4"] Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.661102 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-utilities\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.661215 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-catalog-content\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.661288 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxc9k\" (UniqueName: \"kubernetes.io/projected/d2f3e6cd-4f13-4fb5-93f9-28359a198749-kube-api-access-lxc9k\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.762979 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-utilities\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.763065 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-catalog-content\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.763130 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxc9k\" (UniqueName: \"kubernetes.io/projected/d2f3e6cd-4f13-4fb5-93f9-28359a198749-kube-api-access-lxc9k\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.763639 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-utilities\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.763650 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-catalog-content\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.788475 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxc9k\" (UniqueName: \"kubernetes.io/projected/d2f3e6cd-4f13-4fb5-93f9-28359a198749-kube-api-access-lxc9k\") pod \"community-operators-nh9q4\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:01 crc kubenswrapper[4923]: I0321 04:31:01.921589 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:02 crc kubenswrapper[4923]: I0321 04:31:02.193002 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh9q4"] Mar 21 04:31:02 crc kubenswrapper[4923]: I0321 04:31:02.535420 4923 generic.go:334] "Generic (PLEG): container finished" podID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerID="86197f5c1f590dd78a3d859ceceb703a539a0e9d96296fe9bf37d75bf57ad7c2" exitCode=0 Mar 21 04:31:02 crc kubenswrapper[4923]: I0321 04:31:02.535491 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9q4" event={"ID":"d2f3e6cd-4f13-4fb5-93f9-28359a198749","Type":"ContainerDied","Data":"86197f5c1f590dd78a3d859ceceb703a539a0e9d96296fe9bf37d75bf57ad7c2"} Mar 21 04:31:02 crc kubenswrapper[4923]: I0321 04:31:02.535533 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9q4" event={"ID":"d2f3e6cd-4f13-4fb5-93f9-28359a198749","Type":"ContainerStarted","Data":"892ce841fa2d3430e8f69be8cfa80bb0f86f41485d4b0068a6072cfc57de9892"} Mar 21 04:31:02 crc kubenswrapper[4923]: I0321 04:31:02.537101 4923 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.236148 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.236525 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.236583 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.237229 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ca9400ca4c664dfef2280af9aecd52b539e59a9f840922ab22c0e27838ee22c"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.237288 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://9ca9400ca4c664dfef2280af9aecd52b539e59a9f840922ab22c0e27838ee22c" gracePeriod=600 Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.385959 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-fd8f45f-g4r57" Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.542899 4923 generic.go:334] "Generic (PLEG): container finished" podID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerID="a0f2ef08c546a6814fcbe30e514b065d8837de243547b54fc0c929b02e36c2b2" exitCode=0 Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.543024 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9q4" event={"ID":"d2f3e6cd-4f13-4fb5-93f9-28359a198749","Type":"ContainerDied","Data":"a0f2ef08c546a6814fcbe30e514b065d8837de243547b54fc0c929b02e36c2b2"} Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.545547 4923 generic.go:334] "Generic 
(PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="9ca9400ca4c664dfef2280af9aecd52b539e59a9f840922ab22c0e27838ee22c" exitCode=0 Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.545598 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"9ca9400ca4c664dfef2280af9aecd52b539e59a9f840922ab22c0e27838ee22c"} Mar 21 04:31:03 crc kubenswrapper[4923]: I0321 04:31:03.545637 4923 scope.go:117] "RemoveContainer" containerID="48f63297ab4f945959ffea8973b2bacfde5b52e3247e4ff6ad014d7031e1c7df" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.076951 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644"] Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.078536 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.080125 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.080403 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7rndn" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.083965 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-l7zhk"] Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.086686 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.090174 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.093502 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.096536 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644"] Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.150477 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rw9p8"] Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.151280 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.155446 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.156000 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cq94p" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.156537 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.156699 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.170299 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-748tf"] Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.171145 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.173164 4923 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.192472 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-748tf"] Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.198980 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-conf\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199050 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics-certs\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199071 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-startup\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199091 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199110 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-sockets\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199145 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssd97\" (UniqueName: \"kubernetes.io/projected/a9dca852-085f-4e4a-9ded-ffb15aada6cb-kube-api-access-ssd97\") pod \"frr-k8s-webhook-server-bcc4b6f68-6t644\" (UID: \"a9dca852-085f-4e4a-9ded-ffb15aada6cb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199168 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb65d\" (UniqueName: \"kubernetes.io/projected/11798d91-f8f0-4ba6-9386-b0876b78d927-kube-api-access-mb65d\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199188 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9dca852-085f-4e4a-9ded-ffb15aada6cb-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-6t644\" (UID: \"a9dca852-085f-4e4a-9ded-ffb15aada6cb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.199214 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-reloader\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300480 
4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-conf\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300549 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-metallb-excludel2\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300583 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc875221-d66d-43a1-83ab-42059357491d-cert\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300628 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-memberlist\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300659 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqpx\" (UniqueName: \"kubernetes.io/projected/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-kube-api-access-lbqpx\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300688 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics-certs\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300713 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-startup\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300736 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300757 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-sockets\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300792 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssd97\" (UniqueName: \"kubernetes.io/projected/a9dca852-085f-4e4a-9ded-ffb15aada6cb-kube-api-access-ssd97\") pod \"frr-k8s-webhook-server-bcc4b6f68-6t644\" (UID: \"a9dca852-085f-4e4a-9ded-ffb15aada6cb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300849 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb65d\" (UniqueName: \"kubernetes.io/projected/11798d91-f8f0-4ba6-9386-b0876b78d927-kube-api-access-mb65d\") pod \"frr-k8s-l7zhk\" (UID: 
\"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300875 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9dca852-085f-4e4a-9ded-ffb15aada6cb-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-6t644\" (UID: \"a9dca852-085f-4e4a-9ded-ffb15aada6cb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300902 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc875221-d66d-43a1-83ab-42059357491d-metrics-certs\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300928 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-reloader\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300948 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2f9k\" (UniqueName: \"kubernetes.io/projected/cc875221-d66d-43a1-83ab-42059357491d-kube-api-access-v2f9k\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.300983 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-metrics-certs\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " 
pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.301565 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-conf\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.301735 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: E0321 04:31:04.301775 4923 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.301808 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-reloader\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: E0321 04:31:04.301848 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics-certs podName:11798d91-f8f0-4ba6-9386-b0876b78d927 nodeName:}" failed. No retries permitted until 2026-03-21 04:31:04.801821409 +0000 UTC m=+829.954832496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics-certs") pod "frr-k8s-l7zhk" (UID: "11798d91-f8f0-4ba6-9386-b0876b78d927") : secret "frr-k8s-certs-secret" not found Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.302002 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-sockets\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.302681 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11798d91-f8f0-4ba6-9386-b0876b78d927-frr-startup\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.312347 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9dca852-085f-4e4a-9ded-ffb15aada6cb-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-6t644\" (UID: \"a9dca852-085f-4e4a-9ded-ffb15aada6cb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.320908 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssd97\" (UniqueName: \"kubernetes.io/projected/a9dca852-085f-4e4a-9ded-ffb15aada6cb-kube-api-access-ssd97\") pod \"frr-k8s-webhook-server-bcc4b6f68-6t644\" (UID: \"a9dca852-085f-4e4a-9ded-ffb15aada6cb\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.326083 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb65d\" (UniqueName: 
\"kubernetes.io/projected/11798d91-f8f0-4ba6-9386-b0876b78d927-kube-api-access-mb65d\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.402593 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-metrics-certs\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.402661 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-metallb-excludel2\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.402685 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc875221-d66d-43a1-83ab-42059357491d-cert\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.402745 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-memberlist\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.402770 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbqpx\" (UniqueName: \"kubernetes.io/projected/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-kube-api-access-lbqpx\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " 
pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.402913 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc875221-d66d-43a1-83ab-42059357491d-metrics-certs\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.402953 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2f9k\" (UniqueName: \"kubernetes.io/projected/cc875221-d66d-43a1-83ab-42059357491d-kube-api-access-v2f9k\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: E0321 04:31:04.403407 4923 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 21 04:31:04 crc kubenswrapper[4923]: E0321 04:31:04.403499 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-memberlist podName:8f2886c8-0371-44b8-b2bc-59dfd3a193f6 nodeName:}" failed. No retries permitted until 2026-03-21 04:31:04.903474036 +0000 UTC m=+830.056485133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-memberlist") pod "speaker-rw9p8" (UID: "8f2886c8-0371-44b8-b2bc-59dfd3a193f6") : secret "metallb-memberlist" not found Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.403538 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-metallb-excludel2\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.405987 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cc875221-d66d-43a1-83ab-42059357491d-cert\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.406894 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc875221-d66d-43a1-83ab-42059357491d-metrics-certs\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.408808 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.415744 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-metrics-certs\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.421048 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbqpx\" (UniqueName: \"kubernetes.io/projected/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-kube-api-access-lbqpx\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.421902 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2f9k\" (UniqueName: \"kubernetes.io/projected/cc875221-d66d-43a1-83ab-42059357491d-kube-api-access-v2f9k\") pod \"controller-7bb4cc7c98-748tf\" (UID: \"cc875221-d66d-43a1-83ab-42059357491d\") " pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.485212 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.566923 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9q4" event={"ID":"d2f3e6cd-4f13-4fb5-93f9-28359a198749","Type":"ContainerStarted","Data":"f6c0511d1acbb7a05c5d6fefbc9e14eac30e3c15d86d10c9e72287cf35357a7c"} Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.570063 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"861e5e7c19712fc1c95009bcdddeab790be4423ba276affe7ecbcb3c0afdf835"} Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.589466 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nh9q4" podStartSLOduration=2.17448064 podStartE2EDuration="3.589445986s" podCreationTimestamp="2026-03-21 04:31:01 +0000 UTC" firstStartedPulling="2026-03-21 04:31:02.536646981 +0000 UTC m=+827.689658068" lastFinishedPulling="2026-03-21 04:31:03.951612287 +0000 UTC m=+829.104623414" observedRunningTime="2026-03-21 04:31:04.587202561 +0000 UTC m=+829.740213668" watchObservedRunningTime="2026-03-21 04:31:04.589445986 +0000 UTC m=+829.742457073" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.693585 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-748tf"] Mar 21 04:31:04 crc kubenswrapper[4923]: W0321 04:31:04.699830 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc875221_d66d_43a1_83ab_42059357491d.slice/crio-d484afee60a15e33bf902f9fd8058f379f0c9dabca1f58cbe0c041320e63b21e WatchSource:0}: Error finding container d484afee60a15e33bf902f9fd8058f379f0c9dabca1f58cbe0c041320e63b21e: Status 404 returned error can't find the container 
with id d484afee60a15e33bf902f9fd8058f379f0c9dabca1f58cbe0c041320e63b21e Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.770614 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644"] Mar 21 04:31:04 crc kubenswrapper[4923]: W0321 04:31:04.778639 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9dca852_085f_4e4a_9ded_ffb15aada6cb.slice/crio-d89f9f2f2b6f4139a70c83ce379d09d37c33182cb84b743721ccabf8e13f0dce WatchSource:0}: Error finding container d89f9f2f2b6f4139a70c83ce379d09d37c33182cb84b743721ccabf8e13f0dce: Status 404 returned error can't find the container with id d89f9f2f2b6f4139a70c83ce379d09d37c33182cb84b743721ccabf8e13f0dce Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.807949 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics-certs\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.813295 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11798d91-f8f0-4ba6-9386-b0876b78d927-metrics-certs\") pod \"frr-k8s-l7zhk\" (UID: \"11798d91-f8f0-4ba6-9386-b0876b78d927\") " pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.908879 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-memberlist\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:04 crc kubenswrapper[4923]: I0321 04:31:04.913604 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/8f2886c8-0371-44b8-b2bc-59dfd3a193f6-memberlist\") pod \"speaker-rw9p8\" (UID: \"8f2886c8-0371-44b8-b2bc-59dfd3a193f6\") " pod="metallb-system/speaker-rw9p8" Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.020476 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.068606 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rw9p8" Mar 21 04:31:05 crc kubenswrapper[4923]: W0321 04:31:05.095759 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f2886c8_0371_44b8_b2bc_59dfd3a193f6.slice/crio-8f0a2532493a46b5bec0abadab03a90a6777a8dca6de8729a8fcf54c9e4ebd9a WatchSource:0}: Error finding container 8f0a2532493a46b5bec0abadab03a90a6777a8dca6de8729a8fcf54c9e4ebd9a: Status 404 returned error can't find the container with id 8f0a2532493a46b5bec0abadab03a90a6777a8dca6de8729a8fcf54c9e4ebd9a Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.586282 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rw9p8" event={"ID":"8f2886c8-0371-44b8-b2bc-59dfd3a193f6","Type":"ContainerStarted","Data":"0638393a8570534a9f9b724ac355ef06174d9fe8c98542749070b7c6046bbe2f"} Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.586780 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rw9p8" event={"ID":"8f2886c8-0371-44b8-b2bc-59dfd3a193f6","Type":"ContainerStarted","Data":"8f0a2532493a46b5bec0abadab03a90a6777a8dca6de8729a8fcf54c9e4ebd9a"} Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.588971 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" 
event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerStarted","Data":"0e8b259e9d8bc0a7102c3013b7a0838dcd32c0acb4b17d6258abe0036204e345"} Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.591695 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" event={"ID":"a9dca852-085f-4e4a-9ded-ffb15aada6cb","Type":"ContainerStarted","Data":"d89f9f2f2b6f4139a70c83ce379d09d37c33182cb84b743721ccabf8e13f0dce"} Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.594038 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-748tf" event={"ID":"cc875221-d66d-43a1-83ab-42059357491d","Type":"ContainerStarted","Data":"3ff135219aef81fe405c2690b857bbd2191bc9219fcab5e3df93322646122e55"} Mar 21 04:31:05 crc kubenswrapper[4923]: I0321 04:31:05.594069 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-748tf" event={"ID":"cc875221-d66d-43a1-83ab-42059357491d","Type":"ContainerStarted","Data":"d484afee60a15e33bf902f9fd8058f379f0c9dabca1f58cbe0c041320e63b21e"} Mar 21 04:31:09 crc kubenswrapper[4923]: I0321 04:31:09.619698 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-748tf" event={"ID":"cc875221-d66d-43a1-83ab-42059357491d","Type":"ContainerStarted","Data":"81aec9231dad518b9be6d71310cb887c3bcdf2a4e787d4890f000931b7f8a3de"} Mar 21 04:31:09 crc kubenswrapper[4923]: I0321 04:31:09.621086 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:09 crc kubenswrapper[4923]: I0321 04:31:09.623297 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rw9p8" event={"ID":"8f2886c8-0371-44b8-b2bc-59dfd3a193f6","Type":"ContainerStarted","Data":"4b4be5beb6dc95ffa0930e4ad6c7779517ae6c4fa888f9e487c4368a7b0b6913"} Mar 21 04:31:09 crc kubenswrapper[4923]: I0321 04:31:09.623740 4923 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rw9p8" Mar 21 04:31:09 crc kubenswrapper[4923]: I0321 04:31:09.641916 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-748tf" podStartSLOduration=1.7119476740000001 podStartE2EDuration="5.64189564s" podCreationTimestamp="2026-03-21 04:31:04 +0000 UTC" firstStartedPulling="2026-03-21 04:31:04.812305486 +0000 UTC m=+829.965316593" lastFinishedPulling="2026-03-21 04:31:08.742253472 +0000 UTC m=+833.895264559" observedRunningTime="2026-03-21 04:31:09.636147923 +0000 UTC m=+834.789159020" watchObservedRunningTime="2026-03-21 04:31:09.64189564 +0000 UTC m=+834.794906727" Mar 21 04:31:09 crc kubenswrapper[4923]: I0321 04:31:09.655449 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rw9p8" podStartSLOduration=2.271773231 podStartE2EDuration="5.655429312s" podCreationTimestamp="2026-03-21 04:31:04 +0000 UTC" firstStartedPulling="2026-03-21 04:31:05.339546548 +0000 UTC m=+830.492557645" lastFinishedPulling="2026-03-21 04:31:08.723202639 +0000 UTC m=+833.876213726" observedRunningTime="2026-03-21 04:31:09.651305152 +0000 UTC m=+834.804316249" watchObservedRunningTime="2026-03-21 04:31:09.655429312 +0000 UTC m=+834.808440399" Mar 21 04:31:11 crc kubenswrapper[4923]: I0321 04:31:11.921868 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:11 crc kubenswrapper[4923]: I0321 04:31:11.922182 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:11 crc kubenswrapper[4923]: I0321 04:31:11.992430 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:12 crc kubenswrapper[4923]: I0321 04:31:12.642815 4923 generic.go:334] 
"Generic (PLEG): container finished" podID="11798d91-f8f0-4ba6-9386-b0876b78d927" containerID="bc43bcfb2656fa9225b77f12bc4e9063f99463e690912f88f64cc28b56076cdf" exitCode=0 Mar 21 04:31:12 crc kubenswrapper[4923]: I0321 04:31:12.642935 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerDied","Data":"bc43bcfb2656fa9225b77f12bc4e9063f99463e690912f88f64cc28b56076cdf"} Mar 21 04:31:12 crc kubenswrapper[4923]: I0321 04:31:12.645217 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" event={"ID":"a9dca852-085f-4e4a-9ded-ffb15aada6cb","Type":"ContainerStarted","Data":"e175819ea03fb61315b9579866b4325dadef6e8f8be9353cacaf86f51ebc5117"} Mar 21 04:31:12 crc kubenswrapper[4923]: I0321 04:31:12.645403 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:12 crc kubenswrapper[4923]: I0321 04:31:12.700604 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" podStartSLOduration=1.027339159 podStartE2EDuration="8.700585261s" podCreationTimestamp="2026-03-21 04:31:04 +0000 UTC" firstStartedPulling="2026-03-21 04:31:04.780640168 +0000 UTC m=+829.933651255" lastFinishedPulling="2026-03-21 04:31:12.45388626 +0000 UTC m=+837.606897357" observedRunningTime="2026-03-21 04:31:12.696794561 +0000 UTC m=+837.849805648" watchObservedRunningTime="2026-03-21 04:31:12.700585261 +0000 UTC m=+837.853596338" Mar 21 04:31:12 crc kubenswrapper[4923]: I0321 04:31:12.705943 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:12 crc kubenswrapper[4923]: I0321 04:31:12.750570 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh9q4"] Mar 
21 04:31:13 crc kubenswrapper[4923]: I0321 04:31:13.652946 4923 generic.go:334] "Generic (PLEG): container finished" podID="11798d91-f8f0-4ba6-9386-b0876b78d927" containerID="19ef31c1df73aad111a75fa38cc296f25e3cad985594dc8e94cfc6cc57f61980" exitCode=0 Mar 21 04:31:13 crc kubenswrapper[4923]: I0321 04:31:13.653054 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerDied","Data":"19ef31c1df73aad111a75fa38cc296f25e3cad985594dc8e94cfc6cc57f61980"} Mar 21 04:31:14 crc kubenswrapper[4923]: I0321 04:31:14.491419 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-748tf" Mar 21 04:31:14 crc kubenswrapper[4923]: I0321 04:31:14.660790 4923 generic.go:334] "Generic (PLEG): container finished" podID="11798d91-f8f0-4ba6-9386-b0876b78d927" containerID="972d9da7552db05af5bf6a278a52643dfabc131dd81488c6274d596e1261ec7b" exitCode=0 Mar 21 04:31:14 crc kubenswrapper[4923]: I0321 04:31:14.660892 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerDied","Data":"972d9da7552db05af5bf6a278a52643dfabc131dd81488c6274d596e1261ec7b"} Mar 21 04:31:14 crc kubenswrapper[4923]: I0321 04:31:14.661293 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nh9q4" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="registry-server" containerID="cri-o://f6c0511d1acbb7a05c5d6fefbc9e14eac30e3c15d86d10c9e72287cf35357a7c" gracePeriod=2 Mar 21 04:31:15 crc kubenswrapper[4923]: I0321 04:31:15.074980 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rw9p8" Mar 21 04:31:15 crc kubenswrapper[4923]: I0321 04:31:15.677315 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" 
event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerStarted","Data":"beeaac16810378e88cb62103e184515d8a553c75950f3a8d831c6cf3c64e7b49"} Mar 21 04:31:15 crc kubenswrapper[4923]: I0321 04:31:15.680825 4923 generic.go:334] "Generic (PLEG): container finished" podID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerID="f6c0511d1acbb7a05c5d6fefbc9e14eac30e3c15d86d10c9e72287cf35357a7c" exitCode=0 Mar 21 04:31:15 crc kubenswrapper[4923]: I0321 04:31:15.680904 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9q4" event={"ID":"d2f3e6cd-4f13-4fb5-93f9-28359a198749","Type":"ContainerDied","Data":"f6c0511d1acbb7a05c5d6fefbc9e14eac30e3c15d86d10c9e72287cf35357a7c"} Mar 21 04:31:15 crc kubenswrapper[4923]: I0321 04:31:15.888374 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.018561 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-utilities\") pod \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.018706 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-catalog-content\") pod \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.018838 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxc9k\" (UniqueName: \"kubernetes.io/projected/d2f3e6cd-4f13-4fb5-93f9-28359a198749-kube-api-access-lxc9k\") pod \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\" (UID: \"d2f3e6cd-4f13-4fb5-93f9-28359a198749\") " Mar 
21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.019402 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-utilities" (OuterVolumeSpecName: "utilities") pod "d2f3e6cd-4f13-4fb5-93f9-28359a198749" (UID: "d2f3e6cd-4f13-4fb5-93f9-28359a198749"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.032488 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f3e6cd-4f13-4fb5-93f9-28359a198749-kube-api-access-lxc9k" (OuterVolumeSpecName: "kube-api-access-lxc9k") pod "d2f3e6cd-4f13-4fb5-93f9-28359a198749" (UID: "d2f3e6cd-4f13-4fb5-93f9-28359a198749"). InnerVolumeSpecName "kube-api-access-lxc9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.091800 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2f3e6cd-4f13-4fb5-93f9-28359a198749" (UID: "d2f3e6cd-4f13-4fb5-93f9-28359a198749"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.120011 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.120032 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2f3e6cd-4f13-4fb5-93f9-28359a198749-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.120042 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxc9k\" (UniqueName: \"kubernetes.io/projected/d2f3e6cd-4f13-4fb5-93f9-28359a198749-kube-api-access-lxc9k\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.697053 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerStarted","Data":"d3dfb5652e9c37cdfaeffbb397955c27607790afe87c16751e73dce9bffe4c0c"} Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.697101 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerStarted","Data":"0c0a43e446d661f20a2e98fd02fcf24ff6e2e0e869c7627c058591e7e82a5746"} Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.697111 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerStarted","Data":"f5168f222fbd1c34f0b7e6ea62ca78b62a2b912e6bbbdfac53c267cd3c2b5ae4"} Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.697118 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" 
event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerStarted","Data":"b6ff6dd3b0be3a8571a2b93d5c942789b2a44b213e126c168bb85df3929f4657"} Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.697126 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l7zhk" event={"ID":"11798d91-f8f0-4ba6-9386-b0876b78d927","Type":"ContainerStarted","Data":"a2b5b75365175cdd16564850ce0e2ab0f808d9f8d6d4fc8d0da130a37989f086"} Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.698147 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.702396 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9q4" event={"ID":"d2f3e6cd-4f13-4fb5-93f9-28359a198749","Type":"ContainerDied","Data":"892ce841fa2d3430e8f69be8cfa80bb0f86f41485d4b0068a6072cfc57de9892"} Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.702443 4923 scope.go:117] "RemoveContainer" containerID="f6c0511d1acbb7a05c5d6fefbc9e14eac30e3c15d86d10c9e72287cf35357a7c" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.702517 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh9q4" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.725541 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-l7zhk" podStartSLOduration=5.437730742 podStartE2EDuration="12.725524311s" podCreationTimestamp="2026-03-21 04:31:04 +0000 UTC" firstStartedPulling="2026-03-21 04:31:05.127575914 +0000 UTC m=+830.280587001" lastFinishedPulling="2026-03-21 04:31:12.415369443 +0000 UTC m=+837.568380570" observedRunningTime="2026-03-21 04:31:16.723367748 +0000 UTC m=+841.876378845" watchObservedRunningTime="2026-03-21 04:31:16.725524311 +0000 UTC m=+841.878535398" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.730922 4923 scope.go:117] "RemoveContainer" containerID="a0f2ef08c546a6814fcbe30e514b065d8837de243547b54fc0c929b02e36c2b2" Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.740994 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh9q4"] Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.747163 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nh9q4"] Mar 21 04:31:16 crc kubenswrapper[4923]: I0321 04:31:16.759483 4923 scope.go:117] "RemoveContainer" containerID="86197f5c1f590dd78a3d859ceceb703a539a0e9d96296fe9bf37d75bf57ad7c2" Mar 21 04:31:17 crc kubenswrapper[4923]: I0321 04:31:17.146471 4923 scope.go:117] "RemoveContainer" containerID="7d0f4682a23456b75849130895040460941e184ed48c9db2d2422bda5ee4f0f8" Mar 21 04:31:18 crc kubenswrapper[4923]: I0321 04:31:18.366059 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" path="/var/lib/kubelet/pods/d2f3e6cd-4f13-4fb5-93f9-28359a198749/volumes" Mar 21 04:31:20 crc kubenswrapper[4923]: I0321 04:31:20.021373 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-l7zhk" 
Mar 21 04:31:20 crc kubenswrapper[4923]: I0321 04:31:20.087284 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.054666 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-g459f"] Mar 21 04:31:21 crc kubenswrapper[4923]: E0321 04:31:21.055077 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="extract-utilities" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.055107 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="extract-utilities" Mar 21 04:31:21 crc kubenswrapper[4923]: E0321 04:31:21.055135 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="extract-content" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.055152 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="extract-content" Mar 21 04:31:21 crc kubenswrapper[4923]: E0321 04:31:21.055174 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="registry-server" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.055191 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="registry-server" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.055526 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f3e6cd-4f13-4fb5-93f9-28359a198749" containerName="registry-server" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.056302 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-g459f" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.058762 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-p74lq" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.059587 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.060027 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.073982 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-g459f"] Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.206534 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qwp\" (UniqueName: \"kubernetes.io/projected/c6ce98cc-c881-4e06-b16f-d4e3974f6b13-kube-api-access-k6qwp\") pod \"mariadb-operator-index-g459f\" (UID: \"c6ce98cc-c881-4e06-b16f-d4e3974f6b13\") " pod="openstack-operators/mariadb-operator-index-g459f" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.308252 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6qwp\" (UniqueName: \"kubernetes.io/projected/c6ce98cc-c881-4e06-b16f-d4e3974f6b13-kube-api-access-k6qwp\") pod \"mariadb-operator-index-g459f\" (UID: \"c6ce98cc-c881-4e06-b16f-d4e3974f6b13\") " pod="openstack-operators/mariadb-operator-index-g459f" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.326982 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6qwp\" (UniqueName: \"kubernetes.io/projected/c6ce98cc-c881-4e06-b16f-d4e3974f6b13-kube-api-access-k6qwp\") pod \"mariadb-operator-index-g459f\" (UID: \"c6ce98cc-c881-4e06-b16f-d4e3974f6b13\") " 
pod="openstack-operators/mariadb-operator-index-g459f" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.384503 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-g459f" Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.655236 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-g459f"] Mar 21 04:31:21 crc kubenswrapper[4923]: I0321 04:31:21.748353 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-g459f" event={"ID":"c6ce98cc-c881-4e06-b16f-d4e3974f6b13","Type":"ContainerStarted","Data":"0a2f6d86e82e8d99c8091a7a5f7a32828aa18a3755ed554c9d4aa510113b1957"} Mar 21 04:31:22 crc kubenswrapper[4923]: I0321 04:31:22.760585 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-g459f" event={"ID":"c6ce98cc-c881-4e06-b16f-d4e3974f6b13","Type":"ContainerStarted","Data":"654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d"} Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.229109 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-g459f" podStartSLOduration=2.45063706 podStartE2EDuration="3.229070774s" podCreationTimestamp="2026-03-21 04:31:21 +0000 UTC" firstStartedPulling="2026-03-21 04:31:21.664120125 +0000 UTC m=+846.817131222" lastFinishedPulling="2026-03-21 04:31:22.442553849 +0000 UTC m=+847.595564936" observedRunningTime="2026-03-21 04:31:22.782999698 +0000 UTC m=+847.936010785" watchObservedRunningTime="2026-03-21 04:31:24.229070774 +0000 UTC m=+849.382081891" Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.231763 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-g459f"] Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.420294 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-6t644" Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.776672 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-g459f" podUID="c6ce98cc-c881-4e06-b16f-d4e3974f6b13" containerName="registry-server" containerID="cri-o://654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d" gracePeriod=2 Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.833837 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-nxh74"] Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.835676 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.844125 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-nxh74"] Mar 21 04:31:24 crc kubenswrapper[4923]: I0321 04:31:24.906296 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dr7w\" (UniqueName: \"kubernetes.io/projected/56654110-f2bb-459d-abca-fc6b10f769b4-kube-api-access-7dr7w\") pod \"mariadb-operator-index-nxh74\" (UID: \"56654110-f2bb-459d-abca-fc6b10f769b4\") " pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.007837 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dr7w\" (UniqueName: \"kubernetes.io/projected/56654110-f2bb-459d-abca-fc6b10f769b4-kube-api-access-7dr7w\") pod \"mariadb-operator-index-nxh74\" (UID: \"56654110-f2bb-459d-abca-fc6b10f769b4\") " pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.023301 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-l7zhk" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.076381 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dr7w\" (UniqueName: \"kubernetes.io/projected/56654110-f2bb-459d-abca-fc6b10f769b4-kube-api-access-7dr7w\") pod \"mariadb-operator-index-nxh74\" (UID: \"56654110-f2bb-459d-abca-fc6b10f769b4\") " pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.180405 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.242309 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-g459f" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.413647 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6qwp\" (UniqueName: \"kubernetes.io/projected/c6ce98cc-c881-4e06-b16f-d4e3974f6b13-kube-api-access-k6qwp\") pod \"c6ce98cc-c881-4e06-b16f-d4e3974f6b13\" (UID: \"c6ce98cc-c881-4e06-b16f-d4e3974f6b13\") " Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.420214 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ce98cc-c881-4e06-b16f-d4e3974f6b13-kube-api-access-k6qwp" (OuterVolumeSpecName: "kube-api-access-k6qwp") pod "c6ce98cc-c881-4e06-b16f-d4e3974f6b13" (UID: "c6ce98cc-c881-4e06-b16f-d4e3974f6b13"). InnerVolumeSpecName "kube-api-access-k6qwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.434165 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-nxh74"] Mar 21 04:31:25 crc kubenswrapper[4923]: W0321 04:31:25.443191 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56654110_f2bb_459d_abca_fc6b10f769b4.slice/crio-2b7ea746f1c71e2288d2cbebf4f0ff8f61b7cf41ffb98bd28b0119d06f2d35ea WatchSource:0}: Error finding container 2b7ea746f1c71e2288d2cbebf4f0ff8f61b7cf41ffb98bd28b0119d06f2d35ea: Status 404 returned error can't find the container with id 2b7ea746f1c71e2288d2cbebf4f0ff8f61b7cf41ffb98bd28b0119d06f2d35ea Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.515926 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6qwp\" (UniqueName: \"kubernetes.io/projected/c6ce98cc-c881-4e06-b16f-d4e3974f6b13-kube-api-access-k6qwp\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.783280 4923 generic.go:334] "Generic (PLEG): container finished" podID="c6ce98cc-c881-4e06-b16f-d4e3974f6b13" containerID="654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d" exitCode=0 Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.783344 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-g459f" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.783366 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-g459f" event={"ID":"c6ce98cc-c881-4e06-b16f-d4e3974f6b13","Type":"ContainerDied","Data":"654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d"} Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.783420 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-g459f" event={"ID":"c6ce98cc-c881-4e06-b16f-d4e3974f6b13","Type":"ContainerDied","Data":"0a2f6d86e82e8d99c8091a7a5f7a32828aa18a3755ed554c9d4aa510113b1957"} Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.783441 4923 scope.go:117] "RemoveContainer" containerID="654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.784477 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-nxh74" event={"ID":"56654110-f2bb-459d-abca-fc6b10f769b4","Type":"ContainerStarted","Data":"2b7ea746f1c71e2288d2cbebf4f0ff8f61b7cf41ffb98bd28b0119d06f2d35ea"} Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.796920 4923 scope.go:117] "RemoveContainer" containerID="654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d" Mar 21 04:31:25 crc kubenswrapper[4923]: E0321 04:31:25.797407 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d\": container with ID starting with 654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d not found: ID does not exist" containerID="654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.797455 4923 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d"} err="failed to get container status \"654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d\": rpc error: code = NotFound desc = could not find container \"654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d\": container with ID starting with 654bcfa59c364346df4233fa25ce1ca4335bcc6a5711c06b41e144c2fd11913d not found: ID does not exist" Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.805036 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-g459f"] Mar 21 04:31:25 crc kubenswrapper[4923]: I0321 04:31:25.808648 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-g459f"] Mar 21 04:31:26 crc kubenswrapper[4923]: I0321 04:31:26.372124 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ce98cc-c881-4e06-b16f-d4e3974f6b13" path="/var/lib/kubelet/pods/c6ce98cc-c881-4e06-b16f-d4e3974f6b13/volumes" Mar 21 04:31:26 crc kubenswrapper[4923]: I0321 04:31:26.790444 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-nxh74" event={"ID":"56654110-f2bb-459d-abca-fc6b10f769b4","Type":"ContainerStarted","Data":"78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0"} Mar 21 04:31:26 crc kubenswrapper[4923]: I0321 04:31:26.807642 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-nxh74" podStartSLOduration=2.407798578 podStartE2EDuration="2.807621898s" podCreationTimestamp="2026-03-21 04:31:24 +0000 UTC" firstStartedPulling="2026-03-21 04:31:25.449055478 +0000 UTC m=+850.602066565" lastFinishedPulling="2026-03-21 04:31:25.848878798 +0000 UTC m=+851.001889885" observedRunningTime="2026-03-21 04:31:26.803874379 +0000 UTC m=+851.956885466" watchObservedRunningTime="2026-03-21 04:31:26.807621898 
+0000 UTC m=+851.960632985" Mar 21 04:31:35 crc kubenswrapper[4923]: I0321 04:31:35.181240 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:35 crc kubenswrapper[4923]: I0321 04:31:35.182010 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:35 crc kubenswrapper[4923]: I0321 04:31:35.226662 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:35 crc kubenswrapper[4923]: I0321 04:31:35.886989 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.241306 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fmp5j"] Mar 21 04:31:39 crc kubenswrapper[4923]: E0321 04:31:39.242018 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ce98cc-c881-4e06-b16f-d4e3974f6b13" containerName="registry-server" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.242041 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ce98cc-c881-4e06-b16f-d4e3974f6b13" containerName="registry-server" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.242557 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ce98cc-c881-4e06-b16f-d4e3974f6b13" containerName="registry-server" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.243924 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.274577 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmp5j"] Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.417898 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-utilities\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.417976 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-catalog-content\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.418043 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7sl2\" (UniqueName: \"kubernetes.io/projected/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-kube-api-access-z7sl2\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.518988 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7sl2\" (UniqueName: \"kubernetes.io/projected/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-kube-api-access-z7sl2\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.519491 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-utilities\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.519723 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-catalog-content\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.520004 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-utilities\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.520680 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-catalog-content\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.551102 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7sl2\" (UniqueName: \"kubernetes.io/projected/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-kube-api-access-z7sl2\") pod \"redhat-marketplace-fmp5j\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.577961 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.829060 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmp5j"] Mar 21 04:31:39 crc kubenswrapper[4923]: I0321 04:31:39.889369 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmp5j" event={"ID":"d94c71a6-f46e-4da4-90bf-cccd74f3d13e","Type":"ContainerStarted","Data":"6542acad1814433fa4243e427945bf7d328bc0aa37dadf235ab90693d2b6a02d"} Mar 21 04:31:40 crc kubenswrapper[4923]: I0321 04:31:40.899524 4923 generic.go:334] "Generic (PLEG): container finished" podID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerID="6bc7675be32b2146f04b1b83e6a96b8ac4bf3e97a0c0cd6ac2e82bd74f5e4b5d" exitCode=0 Mar 21 04:31:40 crc kubenswrapper[4923]: I0321 04:31:40.899650 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmp5j" event={"ID":"d94c71a6-f46e-4da4-90bf-cccd74f3d13e","Type":"ContainerDied","Data":"6bc7675be32b2146f04b1b83e6a96b8ac4bf3e97a0c0cd6ac2e82bd74f5e4b5d"} Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.886201 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6"] Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.888067 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.891107 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vcb7s" Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.900999 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6"] Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.909074 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmp5j" event={"ID":"d94c71a6-f46e-4da4-90bf-cccd74f3d13e","Type":"ContainerStarted","Data":"939edf78d8836c3da73612d0c96639f3da2a7b868f5c74e6cebbcec2f3235878"} Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.956157 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-bundle\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.956212 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-util\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:41 crc kubenswrapper[4923]: I0321 04:31:41.956242 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj6lk\" (UniqueName: 
\"kubernetes.io/projected/210b685d-8b03-4ccd-9b23-c0231434354c-kube-api-access-wj6lk\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.058119 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-bundle\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.058183 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-util\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.058212 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj6lk\" (UniqueName: \"kubernetes.io/projected/210b685d-8b03-4ccd-9b23-c0231434354c-kube-api-access-wj6lk\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.058980 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-bundle\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: 
\"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.059054 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-util\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.088799 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj6lk\" (UniqueName: \"kubernetes.io/projected/210b685d-8b03-4ccd-9b23-c0231434354c-kube-api-access-wj6lk\") pod \"7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.218691 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.488254 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6"] Mar 21 04:31:42 crc kubenswrapper[4923]: W0321 04:31:42.499062 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod210b685d_8b03_4ccd_9b23_c0231434354c.slice/crio-6b417436ad197a49fe6f1b22950fc59af71d4bf300a01d9820f394fa5c37366f WatchSource:0}: Error finding container 6b417436ad197a49fe6f1b22950fc59af71d4bf300a01d9820f394fa5c37366f: Status 404 returned error can't find the container with id 6b417436ad197a49fe6f1b22950fc59af71d4bf300a01d9820f394fa5c37366f Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.915381 4923 generic.go:334] "Generic (PLEG): container finished" podID="210b685d-8b03-4ccd-9b23-c0231434354c" containerID="6884b983e5c49af9978a5e9defc499f455a479cb072f1d8185d0ffbd3564f9d9" exitCode=0 Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.915484 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" event={"ID":"210b685d-8b03-4ccd-9b23-c0231434354c","Type":"ContainerDied","Data":"6884b983e5c49af9978a5e9defc499f455a479cb072f1d8185d0ffbd3564f9d9"} Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.915545 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" event={"ID":"210b685d-8b03-4ccd-9b23-c0231434354c","Type":"ContainerStarted","Data":"6b417436ad197a49fe6f1b22950fc59af71d4bf300a01d9820f394fa5c37366f"} Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.917851 4923 generic.go:334] "Generic (PLEG): container finished" 
podID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerID="939edf78d8836c3da73612d0c96639f3da2a7b868f5c74e6cebbcec2f3235878" exitCode=0 Mar 21 04:31:42 crc kubenswrapper[4923]: I0321 04:31:42.917889 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmp5j" event={"ID":"d94c71a6-f46e-4da4-90bf-cccd74f3d13e","Type":"ContainerDied","Data":"939edf78d8836c3da73612d0c96639f3da2a7b868f5c74e6cebbcec2f3235878"} Mar 21 04:31:43 crc kubenswrapper[4923]: I0321 04:31:43.931784 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmp5j" event={"ID":"d94c71a6-f46e-4da4-90bf-cccd74f3d13e","Type":"ContainerStarted","Data":"b9f91bb2bc10074e220d338487d6d3c7a8cf1b67dfcf24c533cee560a44dd776"} Mar 21 04:31:43 crc kubenswrapper[4923]: I0321 04:31:43.937366 4923 generic.go:334] "Generic (PLEG): container finished" podID="210b685d-8b03-4ccd-9b23-c0231434354c" containerID="979b0a1c5b8cfce0c72d45862299ab9988c8912b219cc65e64713d60a716036e" exitCode=0 Mar 21 04:31:43 crc kubenswrapper[4923]: I0321 04:31:43.937425 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" event={"ID":"210b685d-8b03-4ccd-9b23-c0231434354c","Type":"ContainerDied","Data":"979b0a1c5b8cfce0c72d45862299ab9988c8912b219cc65e64713d60a716036e"} Mar 21 04:31:43 crc kubenswrapper[4923]: I0321 04:31:43.976949 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fmp5j" podStartSLOduration=2.538714986 podStartE2EDuration="4.976921653s" podCreationTimestamp="2026-03-21 04:31:39 +0000 UTC" firstStartedPulling="2026-03-21 04:31:40.902241317 +0000 UTC m=+866.055252434" lastFinishedPulling="2026-03-21 04:31:43.340447994 +0000 UTC m=+868.493459101" observedRunningTime="2026-03-21 04:31:43.96992449 +0000 UTC m=+869.122935637" watchObservedRunningTime="2026-03-21 04:31:43.976921653 +0000 
UTC m=+869.129932780" Mar 21 04:31:44 crc kubenswrapper[4923]: I0321 04:31:44.945384 4923 generic.go:334] "Generic (PLEG): container finished" podID="210b685d-8b03-4ccd-9b23-c0231434354c" containerID="de37a24899fd452700acf6258fe8ed6ec020bc4e6869eae2fbe6d287be98bb3f" exitCode=0 Mar 21 04:31:44 crc kubenswrapper[4923]: I0321 04:31:44.945441 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" event={"ID":"210b685d-8b03-4ccd-9b23-c0231434354c","Type":"ContainerDied","Data":"de37a24899fd452700acf6258fe8ed6ec020bc4e6869eae2fbe6d287be98bb3f"} Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.316926 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.429547 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-bundle\") pod \"210b685d-8b03-4ccd-9b23-c0231434354c\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.429708 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-util\") pod \"210b685d-8b03-4ccd-9b23-c0231434354c\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.429759 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj6lk\" (UniqueName: \"kubernetes.io/projected/210b685d-8b03-4ccd-9b23-c0231434354c-kube-api-access-wj6lk\") pod \"210b685d-8b03-4ccd-9b23-c0231434354c\" (UID: \"210b685d-8b03-4ccd-9b23-c0231434354c\") " Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.431389 4923 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-bundle" (OuterVolumeSpecName: "bundle") pod "210b685d-8b03-4ccd-9b23-c0231434354c" (UID: "210b685d-8b03-4ccd-9b23-c0231434354c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.434946 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210b685d-8b03-4ccd-9b23-c0231434354c-kube-api-access-wj6lk" (OuterVolumeSpecName: "kube-api-access-wj6lk") pod "210b685d-8b03-4ccd-9b23-c0231434354c" (UID: "210b685d-8b03-4ccd-9b23-c0231434354c"). InnerVolumeSpecName "kube-api-access-wj6lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.467381 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-util" (OuterVolumeSpecName: "util") pod "210b685d-8b03-4ccd-9b23-c0231434354c" (UID: "210b685d-8b03-4ccd-9b23-c0231434354c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.531160 4923 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.531196 4923 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/210b685d-8b03-4ccd-9b23-c0231434354c-util\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.531210 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj6lk\" (UniqueName: \"kubernetes.io/projected/210b685d-8b03-4ccd-9b23-c0231434354c-kube-api-access-wj6lk\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.965316 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" event={"ID":"210b685d-8b03-4ccd-9b23-c0231434354c","Type":"ContainerDied","Data":"6b417436ad197a49fe6f1b22950fc59af71d4bf300a01d9820f394fa5c37366f"} Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.965407 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b417436ad197a49fe6f1b22950fc59af71d4bf300a01d9820f394fa5c37366f" Mar 21 04:31:46 crc kubenswrapper[4923]: I0321 04:31:46.965441 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.578361 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.578621 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.620946 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.687031 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg"] Mar 21 04:31:49 crc kubenswrapper[4923]: E0321 04:31:49.687237 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" containerName="extract" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.687250 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" containerName="extract" Mar 21 04:31:49 crc kubenswrapper[4923]: E0321 04:31:49.687266 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" containerName="util" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.687272 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" containerName="util" Mar 21 04:31:49 crc kubenswrapper[4923]: E0321 04:31:49.687286 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" containerName="pull" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.687292 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" 
containerName="pull" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.687450 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" containerName="extract" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.687831 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.689812 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.690068 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.690554 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6hszb" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.703767 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg"] Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.798472 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wgmb\" (UniqueName: \"kubernetes.io/projected/892d8b77-92f8-489f-854d-fcbb4ce80dae-kube-api-access-4wgmb\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.798519 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-webhook-cert\") pod 
\"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.798612 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-apiservice-cert\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.900178 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-apiservice-cert\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.900483 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wgmb\" (UniqueName: \"kubernetes.io/projected/892d8b77-92f8-489f-854d-fcbb4ce80dae-kube-api-access-4wgmb\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.900508 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-webhook-cert\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" 
Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.908564 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-apiservice-cert\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.909267 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-webhook-cert\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:49 crc kubenswrapper[4923]: I0321 04:31:49.919288 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wgmb\" (UniqueName: \"kubernetes.io/projected/892d8b77-92f8-489f-854d-fcbb4ce80dae-kube-api-access-4wgmb\") pod \"mariadb-operator-controller-manager-6868c4d546-xvgzg\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:50 crc kubenswrapper[4923]: I0321 04:31:50.013201 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:50 crc kubenswrapper[4923]: I0321 04:31:50.041042 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:50 crc kubenswrapper[4923]: I0321 04:31:50.451835 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg"] Mar 21 04:31:50 crc kubenswrapper[4923]: W0321 04:31:50.460154 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod892d8b77_92f8_489f_854d_fcbb4ce80dae.slice/crio-03695cb67f695896af69bf694f768dd787c49f2ff46af656f089bbf6e86b01fd WatchSource:0}: Error finding container 03695cb67f695896af69bf694f768dd787c49f2ff46af656f089bbf6e86b01fd: Status 404 returned error can't find the container with id 03695cb67f695896af69bf694f768dd787c49f2ff46af656f089bbf6e86b01fd Mar 21 04:31:50 crc kubenswrapper[4923]: I0321 04:31:50.995494 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" event={"ID":"892d8b77-92f8-489f-854d-fcbb4ce80dae","Type":"ContainerStarted","Data":"03695cb67f695896af69bf694f768dd787c49f2ff46af656f089bbf6e86b01fd"} Mar 21 04:31:52 crc kubenswrapper[4923]: I0321 04:31:52.025528 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmp5j"] Mar 21 04:31:52 crc kubenswrapper[4923]: I0321 04:31:52.026103 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fmp5j" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="registry-server" containerID="cri-o://b9f91bb2bc10074e220d338487d6d3c7a8cf1b67dfcf24c533cee560a44dd776" gracePeriod=2 Mar 21 04:31:53 crc kubenswrapper[4923]: I0321 04:31:53.015162 4923 
generic.go:334] "Generic (PLEG): container finished" podID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerID="b9f91bb2bc10074e220d338487d6d3c7a8cf1b67dfcf24c533cee560a44dd776" exitCode=0 Mar 21 04:31:53 crc kubenswrapper[4923]: I0321 04:31:53.015211 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmp5j" event={"ID":"d94c71a6-f46e-4da4-90bf-cccd74f3d13e","Type":"ContainerDied","Data":"b9f91bb2bc10074e220d338487d6d3c7a8cf1b67dfcf24c533cee560a44dd776"} Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.710027 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.774101 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-utilities\") pod \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.774153 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-catalog-content\") pod \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.774227 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7sl2\" (UniqueName: \"kubernetes.io/projected/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-kube-api-access-z7sl2\") pod \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\" (UID: \"d94c71a6-f46e-4da4-90bf-cccd74f3d13e\") " Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.775384 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-utilities" 
(OuterVolumeSpecName: "utilities") pod "d94c71a6-f46e-4da4-90bf-cccd74f3d13e" (UID: "d94c71a6-f46e-4da4-90bf-cccd74f3d13e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.781858 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-kube-api-access-z7sl2" (OuterVolumeSpecName: "kube-api-access-z7sl2") pod "d94c71a6-f46e-4da4-90bf-cccd74f3d13e" (UID: "d94c71a6-f46e-4da4-90bf-cccd74f3d13e"). InnerVolumeSpecName "kube-api-access-z7sl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.803922 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d94c71a6-f46e-4da4-90bf-cccd74f3d13e" (UID: "d94c71a6-f46e-4da4-90bf-cccd74f3d13e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.875312 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.875583 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:54 crc kubenswrapper[4923]: I0321 04:31:54.875593 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7sl2\" (UniqueName: \"kubernetes.io/projected/d94c71a6-f46e-4da4-90bf-cccd74f3d13e-kube-api-access-z7sl2\") on node \"crc\" DevicePath \"\"" Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.029711 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" event={"ID":"892d8b77-92f8-489f-854d-fcbb4ce80dae","Type":"ContainerStarted","Data":"802a4ef752e5e78dc3f5fe8af54cb84e0e873b22089bd982f50b7fcce30b1bf5"} Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.029811 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.032175 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmp5j" event={"ID":"d94c71a6-f46e-4da4-90bf-cccd74f3d13e","Type":"ContainerDied","Data":"6542acad1814433fa4243e427945bf7d328bc0aa37dadf235ab90693d2b6a02d"} Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.032215 4923 scope.go:117] "RemoveContainer" containerID="b9f91bb2bc10074e220d338487d6d3c7a8cf1b67dfcf24c533cee560a44dd776" Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.032383 4923 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmp5j" Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.047263 4923 scope.go:117] "RemoveContainer" containerID="939edf78d8836c3da73612d0c96639f3da2a7b868f5c74e6cebbcec2f3235878" Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.060673 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" podStartSLOduration=1.815602683 podStartE2EDuration="6.060647193s" podCreationTimestamp="2026-03-21 04:31:49 +0000 UTC" firstStartedPulling="2026-03-21 04:31:50.463141167 +0000 UTC m=+875.616152294" lastFinishedPulling="2026-03-21 04:31:54.708185717 +0000 UTC m=+879.861196804" observedRunningTime="2026-03-21 04:31:55.052383974 +0000 UTC m=+880.205395091" watchObservedRunningTime="2026-03-21 04:31:55.060647193 +0000 UTC m=+880.213658290" Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.075596 4923 scope.go:117] "RemoveContainer" containerID="6bc7675be32b2146f04b1b83e6a96b8ac4bf3e97a0c0cd6ac2e82bd74f5e4b5d" Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.084612 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmp5j"] Mar 21 04:31:55 crc kubenswrapper[4923]: I0321 04:31:55.085861 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmp5j"] Mar 21 04:31:56 crc kubenswrapper[4923]: I0321 04:31:56.366615 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" path="/var/lib/kubelet/pods/d94c71a6-f46e-4da4-90bf-cccd74f3d13e/volumes" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.018120 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:32:00 crc kubenswrapper[4923]: 
I0321 04:32:00.121748 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567792-tpmsl"] Mar 21 04:32:00 crc kubenswrapper[4923]: E0321 04:32:00.122404 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="extract-utilities" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.122420 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="extract-utilities" Mar 21 04:32:00 crc kubenswrapper[4923]: E0321 04:32:00.122441 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="registry-server" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.122449 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="registry-server" Mar 21 04:32:00 crc kubenswrapper[4923]: E0321 04:32:00.122462 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="extract-content" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.122470 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="extract-content" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.122608 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="d94c71a6-f46e-4da4-90bf-cccd74f3d13e" containerName="registry-server" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.123053 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567792-tpmsl" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.126690 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.126961 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.132627 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.138260 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567792-tpmsl"] Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.265692 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmqz\" (UniqueName: \"kubernetes.io/projected/34f8c576-1fbc-4288-ad35-efb0873ff5cb-kube-api-access-pcmqz\") pod \"auto-csr-approver-29567792-tpmsl\" (UID: \"34f8c576-1fbc-4288-ad35-efb0873ff5cb\") " pod="openshift-infra/auto-csr-approver-29567792-tpmsl" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.366426 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcmqz\" (UniqueName: \"kubernetes.io/projected/34f8c576-1fbc-4288-ad35-efb0873ff5cb-kube-api-access-pcmqz\") pod \"auto-csr-approver-29567792-tpmsl\" (UID: \"34f8c576-1fbc-4288-ad35-efb0873ff5cb\") " pod="openshift-infra/auto-csr-approver-29567792-tpmsl" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.398192 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmqz\" (UniqueName: \"kubernetes.io/projected/34f8c576-1fbc-4288-ad35-efb0873ff5cb-kube-api-access-pcmqz\") pod \"auto-csr-approver-29567792-tpmsl\" (UID: \"34f8c576-1fbc-4288-ad35-efb0873ff5cb\") " 
pod="openshift-infra/auto-csr-approver-29567792-tpmsl" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.440225 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567792-tpmsl" Mar 21 04:32:00 crc kubenswrapper[4923]: I0321 04:32:00.703039 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567792-tpmsl"] Mar 21 04:32:01 crc kubenswrapper[4923]: I0321 04:32:01.082333 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567792-tpmsl" event={"ID":"34f8c576-1fbc-4288-ad35-efb0873ff5cb","Type":"ContainerStarted","Data":"1eb84aa77c4f8a0893405fab100d8d1ec73bff115ce1bb92ee67511adb1889da"} Mar 21 04:32:03 crc kubenswrapper[4923]: I0321 04:32:03.096962 4923 generic.go:334] "Generic (PLEG): container finished" podID="34f8c576-1fbc-4288-ad35-efb0873ff5cb" containerID="c43d7984a1b80a0004c4cbb4cb90c1b6acd2e3edb49fc586c1c04157c4374f15" exitCode=0 Mar 21 04:32:03 crc kubenswrapper[4923]: I0321 04:32:03.097017 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567792-tpmsl" event={"ID":"34f8c576-1fbc-4288-ad35-efb0873ff5cb","Type":"ContainerDied","Data":"c43d7984a1b80a0004c4cbb4cb90c1b6acd2e3edb49fc586c1c04157c4374f15"} Mar 21 04:32:04 crc kubenswrapper[4923]: I0321 04:32:04.448476 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567792-tpmsl" Mar 21 04:32:04 crc kubenswrapper[4923]: I0321 04:32:04.519673 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcmqz\" (UniqueName: \"kubernetes.io/projected/34f8c576-1fbc-4288-ad35-efb0873ff5cb-kube-api-access-pcmqz\") pod \"34f8c576-1fbc-4288-ad35-efb0873ff5cb\" (UID: \"34f8c576-1fbc-4288-ad35-efb0873ff5cb\") " Mar 21 04:32:04 crc kubenswrapper[4923]: I0321 04:32:04.525126 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f8c576-1fbc-4288-ad35-efb0873ff5cb-kube-api-access-pcmqz" (OuterVolumeSpecName: "kube-api-access-pcmqz") pod "34f8c576-1fbc-4288-ad35-efb0873ff5cb" (UID: "34f8c576-1fbc-4288-ad35-efb0873ff5cb"). InnerVolumeSpecName "kube-api-access-pcmqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:32:04 crc kubenswrapper[4923]: I0321 04:32:04.621799 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcmqz\" (UniqueName: \"kubernetes.io/projected/34f8c576-1fbc-4288-ad35-efb0873ff5cb-kube-api-access-pcmqz\") on node \"crc\" DevicePath \"\"" Mar 21 04:32:05 crc kubenswrapper[4923]: I0321 04:32:05.121962 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567792-tpmsl" event={"ID":"34f8c576-1fbc-4288-ad35-efb0873ff5cb","Type":"ContainerDied","Data":"1eb84aa77c4f8a0893405fab100d8d1ec73bff115ce1bb92ee67511adb1889da"} Mar 21 04:32:05 crc kubenswrapper[4923]: I0321 04:32:05.122013 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb84aa77c4f8a0893405fab100d8d1ec73bff115ce1bb92ee67511adb1889da" Mar 21 04:32:05 crc kubenswrapper[4923]: I0321 04:32:05.122025 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567792-tpmsl" Mar 21 04:32:05 crc kubenswrapper[4923]: I0321 04:32:05.494516 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567786-dskdm"] Mar 21 04:32:05 crc kubenswrapper[4923]: I0321 04:32:05.499028 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567786-dskdm"] Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.371052 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d36ca2-41f9-43a2-a4f2-92eb9c9d947f" path="/var/lib/kubelet/pods/72d36ca2-41f9-43a2-a4f2-92eb9c9d947f/volumes" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.632676 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-cbd92"] Mar 21 04:32:06 crc kubenswrapper[4923]: E0321 04:32:06.633052 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f8c576-1fbc-4288-ad35-efb0873ff5cb" containerName="oc" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.633075 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f8c576-1fbc-4288-ad35-efb0873ff5cb" containerName="oc" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.633276 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f8c576-1fbc-4288-ad35-efb0873ff5cb" containerName="oc" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.634144 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-cbd92" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.636905 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-xxqgd" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.643614 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-cbd92"] Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.749812 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb427\" (UniqueName: \"kubernetes.io/projected/952f08d2-49f3-45e4-b94f-451415bfda88-kube-api-access-cb427\") pod \"infra-operator-index-cbd92\" (UID: \"952f08d2-49f3-45e4-b94f-451415bfda88\") " pod="openstack-operators/infra-operator-index-cbd92" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.850595 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb427\" (UniqueName: \"kubernetes.io/projected/952f08d2-49f3-45e4-b94f-451415bfda88-kube-api-access-cb427\") pod \"infra-operator-index-cbd92\" (UID: \"952f08d2-49f3-45e4-b94f-451415bfda88\") " pod="openstack-operators/infra-operator-index-cbd92" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.868551 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb427\" (UniqueName: \"kubernetes.io/projected/952f08d2-49f3-45e4-b94f-451415bfda88-kube-api-access-cb427\") pod \"infra-operator-index-cbd92\" (UID: \"952f08d2-49f3-45e4-b94f-451415bfda88\") " pod="openstack-operators/infra-operator-index-cbd92" Mar 21 04:32:06 crc kubenswrapper[4923]: I0321 04:32:06.960383 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-cbd92" Mar 21 04:32:07 crc kubenswrapper[4923]: I0321 04:32:07.230184 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-cbd92"] Mar 21 04:32:08 crc kubenswrapper[4923]: I0321 04:32:08.150749 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cbd92" event={"ID":"952f08d2-49f3-45e4-b94f-451415bfda88","Type":"ContainerStarted","Data":"bd6a58e030a0e0fa54c337042ef5fd616e56b79182f4102095a6c6f72d2976a1"} Mar 21 04:32:09 crc kubenswrapper[4923]: I0321 04:32:09.161739 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cbd92" event={"ID":"952f08d2-49f3-45e4-b94f-451415bfda88","Type":"ContainerStarted","Data":"a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d"} Mar 21 04:32:09 crc kubenswrapper[4923]: I0321 04:32:09.190989 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-cbd92" podStartSLOduration=2.216101467 podStartE2EDuration="3.190961065s" podCreationTimestamp="2026-03-21 04:32:06 +0000 UTC" firstStartedPulling="2026-03-21 04:32:07.238734486 +0000 UTC m=+892.391745583" lastFinishedPulling="2026-03-21 04:32:08.213594044 +0000 UTC m=+893.366605181" observedRunningTime="2026-03-21 04:32:09.187258018 +0000 UTC m=+894.340269155" watchObservedRunningTime="2026-03-21 04:32:09.190961065 +0000 UTC m=+894.343972192" Mar 21 04:32:10 crc kubenswrapper[4923]: I0321 04:32:10.820560 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-cbd92"] Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.177473 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-cbd92" podUID="952f08d2-49f3-45e4-b94f-451415bfda88" containerName="registry-server" 
containerID="cri-o://a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d" gracePeriod=2 Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.457614 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-hkph2"] Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.464996 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.475341 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-hkph2"] Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.567812 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks98q\" (UniqueName: \"kubernetes.io/projected/85d3f09f-876d-45c0-8631-5df235d5429e-kube-api-access-ks98q\") pod \"infra-operator-index-hkph2\" (UID: \"85d3f09f-876d-45c0-8631-5df235d5429e\") " pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.593720 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-cbd92" Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.668980 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks98q\" (UniqueName: \"kubernetes.io/projected/85d3f09f-876d-45c0-8631-5df235d5429e-kube-api-access-ks98q\") pod \"infra-operator-index-hkph2\" (UID: \"85d3f09f-876d-45c0-8631-5df235d5429e\") " pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.691907 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks98q\" (UniqueName: \"kubernetes.io/projected/85d3f09f-876d-45c0-8631-5df235d5429e-kube-api-access-ks98q\") pod \"infra-operator-index-hkph2\" (UID: \"85d3f09f-876d-45c0-8631-5df235d5429e\") " pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.769603 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb427\" (UniqueName: \"kubernetes.io/projected/952f08d2-49f3-45e4-b94f-451415bfda88-kube-api-access-cb427\") pod \"952f08d2-49f3-45e4-b94f-451415bfda88\" (UID: \"952f08d2-49f3-45e4-b94f-451415bfda88\") " Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.772681 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952f08d2-49f3-45e4-b94f-451415bfda88-kube-api-access-cb427" (OuterVolumeSpecName: "kube-api-access-cb427") pod "952f08d2-49f3-45e4-b94f-451415bfda88" (UID: "952f08d2-49f3-45e4-b94f-451415bfda88"). InnerVolumeSpecName "kube-api-access-cb427". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.811399 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:11 crc kubenswrapper[4923]: I0321 04:32:11.871600 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb427\" (UniqueName: \"kubernetes.io/projected/952f08d2-49f3-45e4-b94f-451415bfda88-kube-api-access-cb427\") on node \"crc\" DevicePath \"\"" Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.052039 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-hkph2"] Mar 21 04:32:12 crc kubenswrapper[4923]: W0321 04:32:12.056219 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85d3f09f_876d_45c0_8631_5df235d5429e.slice/crio-7bc31143cabf1b90f2f7f8b92d379751f874f16f6a3f65031de978006b84cadd WatchSource:0}: Error finding container 7bc31143cabf1b90f2f7f8b92d379751f874f16f6a3f65031de978006b84cadd: Status 404 returned error can't find the container with id 7bc31143cabf1b90f2f7f8b92d379751f874f16f6a3f65031de978006b84cadd Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.185915 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-hkph2" event={"ID":"85d3f09f-876d-45c0-8631-5df235d5429e","Type":"ContainerStarted","Data":"7bc31143cabf1b90f2f7f8b92d379751f874f16f6a3f65031de978006b84cadd"} Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.187698 4923 generic.go:334] "Generic (PLEG): container finished" podID="952f08d2-49f3-45e4-b94f-451415bfda88" containerID="a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d" exitCode=0 Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.188521 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-cbd92" Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.188610 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cbd92" event={"ID":"952f08d2-49f3-45e4-b94f-451415bfda88","Type":"ContainerDied","Data":"a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d"} Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.188694 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-cbd92" event={"ID":"952f08d2-49f3-45e4-b94f-451415bfda88","Type":"ContainerDied","Data":"bd6a58e030a0e0fa54c337042ef5fd616e56b79182f4102095a6c6f72d2976a1"} Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.188722 4923 scope.go:117] "RemoveContainer" containerID="a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d" Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.213924 4923 scope.go:117] "RemoveContainer" containerID="a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d" Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.216464 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-cbd92"] Mar 21 04:32:12 crc kubenswrapper[4923]: E0321 04:32:12.216784 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d\": container with ID starting with a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d not found: ID does not exist" containerID="a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d" Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.216844 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d"} err="failed to get container status 
\"a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d\": rpc error: code = NotFound desc = could not find container \"a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d\": container with ID starting with a0514d694adc6cb3ac0d8ff8d0df3188b522c06241115dbfe2db82935b1de39d not found: ID does not exist" Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.220214 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-cbd92"] Mar 21 04:32:12 crc kubenswrapper[4923]: I0321 04:32:12.364878 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952f08d2-49f3-45e4-b94f-451415bfda88" path="/var/lib/kubelet/pods/952f08d2-49f3-45e4-b94f-451415bfda88/volumes" Mar 21 04:32:13 crc kubenswrapper[4923]: I0321 04:32:13.197755 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-hkph2" event={"ID":"85d3f09f-876d-45c0-8631-5df235d5429e","Type":"ContainerStarted","Data":"7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb"} Mar 21 04:32:13 crc kubenswrapper[4923]: I0321 04:32:13.219929 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-hkph2" podStartSLOduration=1.750990488 podStartE2EDuration="2.219903471s" podCreationTimestamp="2026-03-21 04:32:11 +0000 UTC" firstStartedPulling="2026-03-21 04:32:12.063300835 +0000 UTC m=+897.216311922" lastFinishedPulling="2026-03-21 04:32:12.532213778 +0000 UTC m=+897.685224905" observedRunningTime="2026-03-21 04:32:13.215868454 +0000 UTC m=+898.368879581" watchObservedRunningTime="2026-03-21 04:32:13.219903471 +0000 UTC m=+898.372914568" Mar 21 04:32:17 crc kubenswrapper[4923]: I0321 04:32:17.246567 4923 scope.go:117] "RemoveContainer" containerID="3ab34c821665e75c8bcea6fcf5ac8e022849ddb87a1a9b51facf2a60a1d35060" Mar 21 04:32:21 crc kubenswrapper[4923]: I0321 04:32:21.811651 4923 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:21 crc kubenswrapper[4923]: I0321 04:32:21.812156 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:21 crc kubenswrapper[4923]: I0321 04:32:21.868422 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:22 crc kubenswrapper[4923]: I0321 04:32:22.313276 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.691389 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4"] Mar 21 04:32:25 crc kubenswrapper[4923]: E0321 04:32:25.694007 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952f08d2-49f3-45e4-b94f-451415bfda88" containerName="registry-server" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.694387 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="952f08d2-49f3-45e4-b94f-451415bfda88" containerName="registry-server" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.694606 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="952f08d2-49f3-45e4-b94f-451415bfda88" containerName="registry-server" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.696075 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.701314 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vcb7s" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.704704 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4"] Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.877912 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-util\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.878041 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chbbc\" (UniqueName: \"kubernetes.io/projected/e365be8a-9b66-42a3-87e4-8d6dcfced627-kube-api-access-chbbc\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.878072 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-bundle\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 
04:32:25.982167 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chbbc\" (UniqueName: \"kubernetes.io/projected/e365be8a-9b66-42a3-87e4-8d6dcfced627-kube-api-access-chbbc\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.982626 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-bundle\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.982854 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-util\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.983614 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-bundle\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:25 crc kubenswrapper[4923]: I0321 04:32:25.983731 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-util\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:26 crc kubenswrapper[4923]: I0321 04:32:26.016509 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chbbc\" (UniqueName: \"kubernetes.io/projected/e365be8a-9b66-42a3-87e4-8d6dcfced627-kube-api-access-chbbc\") pod \"014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:26 crc kubenswrapper[4923]: I0321 04:32:26.316118 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:26 crc kubenswrapper[4923]: I0321 04:32:26.584801 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4"] Mar 21 04:32:26 crc kubenswrapper[4923]: W0321 04:32:26.591257 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode365be8a_9b66_42a3_87e4_8d6dcfced627.slice/crio-8bd2dbfd5535ee3461a73818d7b6338699f4857527548a64f427a172e4d2f309 WatchSource:0}: Error finding container 8bd2dbfd5535ee3461a73818d7b6338699f4857527548a64f427a172e4d2f309: Status 404 returned error can't find the container with id 8bd2dbfd5535ee3461a73818d7b6338699f4857527548a64f427a172e4d2f309 Mar 21 04:32:27 crc kubenswrapper[4923]: I0321 04:32:27.305532 4923 generic.go:334] "Generic (PLEG): container finished" podID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerID="98593327340a2458fd4716274cc3c47e84a99ec10498a7df35fe3aeba758f51a" exitCode=0 Mar 21 
04:32:27 crc kubenswrapper[4923]: I0321 04:32:27.305900 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" event={"ID":"e365be8a-9b66-42a3-87e4-8d6dcfced627","Type":"ContainerDied","Data":"98593327340a2458fd4716274cc3c47e84a99ec10498a7df35fe3aeba758f51a"} Mar 21 04:32:27 crc kubenswrapper[4923]: I0321 04:32:27.305948 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" event={"ID":"e365be8a-9b66-42a3-87e4-8d6dcfced627","Type":"ContainerStarted","Data":"8bd2dbfd5535ee3461a73818d7b6338699f4857527548a64f427a172e4d2f309"} Mar 21 04:32:28 crc kubenswrapper[4923]: I0321 04:32:28.328992 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" event={"ID":"e365be8a-9b66-42a3-87e4-8d6dcfced627","Type":"ContainerStarted","Data":"cd3ab49393764b1a2f849f5ffff1dec3f075c6433e53262099d09271e67f3edd"} Mar 21 04:32:29 crc kubenswrapper[4923]: I0321 04:32:29.340663 4923 generic.go:334] "Generic (PLEG): container finished" podID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerID="cd3ab49393764b1a2f849f5ffff1dec3f075c6433e53262099d09271e67f3edd" exitCode=0 Mar 21 04:32:29 crc kubenswrapper[4923]: I0321 04:32:29.340737 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" event={"ID":"e365be8a-9b66-42a3-87e4-8d6dcfced627","Type":"ContainerDied","Data":"cd3ab49393764b1a2f849f5ffff1dec3f075c6433e53262099d09271e67f3edd"} Mar 21 04:32:30 crc kubenswrapper[4923]: I0321 04:32:30.350255 4923 generic.go:334] "Generic (PLEG): container finished" podID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerID="d7311cd62fe728871b73d36a083236fa6f7b91b4f53671bce1293a4c0472f64a" exitCode=0 Mar 21 04:32:30 crc kubenswrapper[4923]: I0321 
04:32:30.350439 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" event={"ID":"e365be8a-9b66-42a3-87e4-8d6dcfced627","Type":"ContainerDied","Data":"d7311cd62fe728871b73d36a083236fa6f7b91b4f53671bce1293a4c0472f64a"} Mar 21 04:32:31 crc kubenswrapper[4923]: I0321 04:32:31.776693 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:31 crc kubenswrapper[4923]: I0321 04:32:31.972725 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chbbc\" (UniqueName: \"kubernetes.io/projected/e365be8a-9b66-42a3-87e4-8d6dcfced627-kube-api-access-chbbc\") pod \"e365be8a-9b66-42a3-87e4-8d6dcfced627\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " Mar 21 04:32:31 crc kubenswrapper[4923]: I0321 04:32:31.972809 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-bundle\") pod \"e365be8a-9b66-42a3-87e4-8d6dcfced627\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " Mar 21 04:32:31 crc kubenswrapper[4923]: I0321 04:32:31.972903 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-util\") pod \"e365be8a-9b66-42a3-87e4-8d6dcfced627\" (UID: \"e365be8a-9b66-42a3-87e4-8d6dcfced627\") " Mar 21 04:32:31 crc kubenswrapper[4923]: I0321 04:32:31.978208 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-bundle" (OuterVolumeSpecName: "bundle") pod "e365be8a-9b66-42a3-87e4-8d6dcfced627" (UID: "e365be8a-9b66-42a3-87e4-8d6dcfced627"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:32:31 crc kubenswrapper[4923]: I0321 04:32:31.982105 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e365be8a-9b66-42a3-87e4-8d6dcfced627-kube-api-access-chbbc" (OuterVolumeSpecName: "kube-api-access-chbbc") pod "e365be8a-9b66-42a3-87e4-8d6dcfced627" (UID: "e365be8a-9b66-42a3-87e4-8d6dcfced627"). InnerVolumeSpecName "kube-api-access-chbbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:32:32 crc kubenswrapper[4923]: I0321 04:32:32.005486 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-util" (OuterVolumeSpecName: "util") pod "e365be8a-9b66-42a3-87e4-8d6dcfced627" (UID: "e365be8a-9b66-42a3-87e4-8d6dcfced627"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:32:32 crc kubenswrapper[4923]: I0321 04:32:32.074905 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chbbc\" (UniqueName: \"kubernetes.io/projected/e365be8a-9b66-42a3-87e4-8d6dcfced627-kube-api-access-chbbc\") on node \"crc\" DevicePath \"\"" Mar 21 04:32:32 crc kubenswrapper[4923]: I0321 04:32:32.074955 4923 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:32:32 crc kubenswrapper[4923]: I0321 04:32:32.074977 4923 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e365be8a-9b66-42a3-87e4-8d6dcfced627-util\") on node \"crc\" DevicePath \"\"" Mar 21 04:32:32 crc kubenswrapper[4923]: I0321 04:32:32.378490 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" Mar 21 04:32:32 crc kubenswrapper[4923]: I0321 04:32:32.381187 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4" event={"ID":"e365be8a-9b66-42a3-87e4-8d6dcfced627","Type":"ContainerDied","Data":"8bd2dbfd5535ee3461a73818d7b6338699f4857527548a64f427a172e4d2f309"} Mar 21 04:32:32 crc kubenswrapper[4923]: I0321 04:32:32.381234 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bd2dbfd5535ee3461a73818d7b6338699f4857527548a64f427a172e4d2f309" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.526110 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Mar 21 04:32:36 crc kubenswrapper[4923]: E0321 04:32:36.526795 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerName="util" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.526819 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerName="util" Mar 21 04:32:36 crc kubenswrapper[4923]: E0321 04:32:36.526840 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerName="pull" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.526851 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerName="pull" Mar 21 04:32:36 crc kubenswrapper[4923]: E0321 04:32:36.526890 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerName="extract" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.526901 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerName="extract" Mar 21 04:32:36 
crc kubenswrapper[4923]: I0321 04:32:36.527075 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" containerName="extract" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.528016 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.530908 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"openshift-service-ca.crt" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.531377 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"openstack-scripts" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.531474 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"openstack-config-data" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.531561 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"galera-openstack-dockercfg-p97f5" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.531590 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"kube-root-ca.crt" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.543598 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.550257 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.551678 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.558560 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.560108 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.565251 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.573104 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.647758 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.647812 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.647860 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kolla-config\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.647891 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-default\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.647917 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.647992 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgtjp\" (UniqueName: \"kubernetes.io/projected/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kube-api-access-cgtjp\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.749240 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-generated\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.749777 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kolla-config\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc 
kubenswrapper[4923]: I0321 04:32:36.749894 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-kolla-config\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750011 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgtjp\" (UniqueName: \"kubernetes.io/projected/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kube-api-access-cgtjp\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750133 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-default\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750248 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750373 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750481 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-operator-scripts\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750597 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-generated\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750697 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-operator-scripts\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750801 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kolla-config\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750907 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-default\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750840 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.750724 4923 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") device mount path \"/mnt/openstack/pv05\"" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.751146 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.751251 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-default\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.751387 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.751505 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmj6\" (UniqueName: \"kubernetes.io/projected/216ff735-f76d-413a-bff8-8e0dfd4177c2-kube-api-access-gjmj6\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.751618 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.751732 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fwg\" (UniqueName: \"kubernetes.io/projected/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kube-api-access-99fwg\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.751661 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kolla-config\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.752314 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-default\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.752835 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.785181 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgtjp\" (UniqueName: \"kubernetes.io/projected/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kube-api-access-cgtjp\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.789896 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.844093 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852426 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-default\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852480 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-operator-scripts\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852514 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-generated\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852536 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-operator-scripts\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852568 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc 
kubenswrapper[4923]: I0321 04:32:36.852593 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-default\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852620 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmj6\" (UniqueName: \"kubernetes.io/projected/216ff735-f76d-413a-bff8-8e0dfd4177c2-kube-api-access-gjmj6\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852647 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852670 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99fwg\" (UniqueName: \"kubernetes.io/projected/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kube-api-access-99fwg\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852709 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-generated\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852734 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kolla-config\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.852754 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-kolla-config\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.853552 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-kolla-config\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.854142 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-default\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.855681 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-operator-scripts\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.855935 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-generated\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.857074 4923 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") device mount path \"/mnt/openstack/pv07\"" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.857616 4923 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") device mount path \"/mnt/openstack/pv01\"" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.857749 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-default\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.858453 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kolla-config\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.864030 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-operator-scripts\") 
pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.866102 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-generated\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.872423 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.874043 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.875003 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99fwg\" (UniqueName: \"kubernetes.io/projected/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kube-api-access-99fwg\") pod \"openstack-galera-1\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.878563 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:36 crc kubenswrapper[4923]: I0321 04:32:36.883968 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmj6\" (UniqueName: \"kubernetes.io/projected/216ff735-f76d-413a-bff8-8e0dfd4177c2-kube-api-access-gjmj6\") pod \"openstack-galera-2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:37 crc kubenswrapper[4923]: I0321 04:32:37.059880 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Mar 21 04:32:37 crc kubenswrapper[4923]: W0321 04:32:37.068391 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba6d1e7_e4e3_4857_a4b3_5ed2265343c0.slice/crio-45a669c11dadb29131042b02aea11571e75ab346d95888a0f0dbaa864e891131 WatchSource:0}: Error finding container 45a669c11dadb29131042b02aea11571e75ab346d95888a0f0dbaa864e891131: Status 404 returned error can't find the container with id 45a669c11dadb29131042b02aea11571e75ab346d95888a0f0dbaa864e891131 Mar 21 04:32:37 crc kubenswrapper[4923]: I0321 04:32:37.109304 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Mar 21 04:32:37 crc kubenswrapper[4923]: W0321 04:32:37.116476 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f7fa4a2_4b42_4f5d_85d3_3d40e54e6b0d.slice/crio-b1cd6681a0d3dab3f748fd7d0211351745e83efa254960dac801b388a0700637 WatchSource:0}: Error finding container b1cd6681a0d3dab3f748fd7d0211351745e83efa254960dac801b388a0700637: Status 404 returned error can't find the container with id b1cd6681a0d3dab3f748fd7d0211351745e83efa254960dac801b388a0700637 Mar 21 04:32:37 crc kubenswrapper[4923]: I0321 04:32:37.170031 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:37 crc kubenswrapper[4923]: I0321 04:32:37.390485 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Mar 21 04:32:37 crc kubenswrapper[4923]: W0321 04:32:37.403771 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod216ff735_f76d_413a_bff8_8e0dfd4177c2.slice/crio-3eb573dcfc7a53e006b2505306542ebc524eeb2d62caf3aa4c6afa2d8ebe485f WatchSource:0}: Error finding container 3eb573dcfc7a53e006b2505306542ebc524eeb2d62caf3aa4c6afa2d8ebe485f: Status 404 returned error can't find the container with id 3eb573dcfc7a53e006b2505306542ebc524eeb2d62caf3aa4c6afa2d8ebe485f Mar 21 04:32:37 crc kubenswrapper[4923]: I0321 04:32:37.412384 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d","Type":"ContainerStarted","Data":"b1cd6681a0d3dab3f748fd7d0211351745e83efa254960dac801b388a0700637"} Mar 21 04:32:37 crc kubenswrapper[4923]: I0321 04:32:37.414247 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0","Type":"ContainerStarted","Data":"45a669c11dadb29131042b02aea11571e75ab346d95888a0f0dbaa864e891131"} Mar 21 04:32:38 crc kubenswrapper[4923]: I0321 04:32:38.429415 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"216ff735-f76d-413a-bff8-8e0dfd4177c2","Type":"ContainerStarted","Data":"3eb573dcfc7a53e006b2505306542ebc524eeb2d62caf3aa4c6afa2d8ebe485f"} Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.268544 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9"] Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.269565 4923 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.273421 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.273656 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4b677" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.285941 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9"] Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.423157 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-webhook-cert\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.423246 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-apiservice-cert\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.423371 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94pt\" (UniqueName: \"kubernetes.io/projected/b57d2b83-885a-44b0-b334-f0dd96568ba9-kube-api-access-m94pt\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: 
\"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.524723 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-webhook-cert\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.524860 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-apiservice-cert\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.524943 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94pt\" (UniqueName: \"kubernetes.io/projected/b57d2b83-885a-44b0-b334-f0dd96568ba9-kube-api-access-m94pt\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.538189 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-apiservice-cert\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.538236 4923 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-webhook-cert\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.541502 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94pt\" (UniqueName: \"kubernetes.io/projected/b57d2b83-885a-44b0-b334-f0dd96568ba9-kube-api-access-m94pt\") pod \"infra-operator-controller-manager-5f474f7cc-h8gg9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:41 crc kubenswrapper[4923]: I0321 04:32:41.585745 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:45 crc kubenswrapper[4923]: I0321 04:32:45.486449 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"216ff735-f76d-413a-bff8-8e0dfd4177c2","Type":"ContainerStarted","Data":"6e3a53b71ecefaa546e459f718049404f28b24151fee67a788c66a173476317b"} Mar 21 04:32:45 crc kubenswrapper[4923]: I0321 04:32:45.487768 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0","Type":"ContainerStarted","Data":"4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0"} Mar 21 04:32:45 crc kubenswrapper[4923]: I0321 04:32:45.490781 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d","Type":"ContainerStarted","Data":"70eb13e63c244ffe7b910e005545083669078f0ed021473a043c0d8bf2ab0bbb"} Mar 21 04:32:45 crc kubenswrapper[4923]: I0321 04:32:45.546247 4923 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9"] Mar 21 04:32:45 crc kubenswrapper[4923]: W0321 04:32:45.548265 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57d2b83_885a_44b0_b334_f0dd96568ba9.slice/crio-6a933926bd5d58e2a22f59ece4bf5b15017a647724abed52a98acc47df1e2efd WatchSource:0}: Error finding container 6a933926bd5d58e2a22f59ece4bf5b15017a647724abed52a98acc47df1e2efd: Status 404 returned error can't find the container with id 6a933926bd5d58e2a22f59ece4bf5b15017a647724abed52a98acc47df1e2efd Mar 21 04:32:46 crc kubenswrapper[4923]: I0321 04:32:46.499855 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" event={"ID":"b57d2b83-885a-44b0-b334-f0dd96568ba9","Type":"ContainerStarted","Data":"6a933926bd5d58e2a22f59ece4bf5b15017a647724abed52a98acc47df1e2efd"} Mar 21 04:32:48 crc kubenswrapper[4923]: I0321 04:32:48.516724 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" event={"ID":"b57d2b83-885a-44b0-b334-f0dd96568ba9","Type":"ContainerStarted","Data":"956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69"} Mar 21 04:32:48 crc kubenswrapper[4923]: I0321 04:32:48.517174 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:32:48 crc kubenswrapper[4923]: I0321 04:32:48.547562 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" podStartSLOduration=5.520931662 podStartE2EDuration="7.547537694s" podCreationTimestamp="2026-03-21 04:32:41 +0000 UTC" firstStartedPulling="2026-03-21 04:32:45.551541842 +0000 UTC m=+930.704552929" 
lastFinishedPulling="2026-03-21 04:32:47.578147874 +0000 UTC m=+932.731158961" observedRunningTime="2026-03-21 04:32:48.541272753 +0000 UTC m=+933.694283870" watchObservedRunningTime="2026-03-21 04:32:48.547537694 +0000 UTC m=+933.700548811" Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.523668 4923 generic.go:334] "Generic (PLEG): container finished" podID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerID="4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0" exitCode=0 Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.523743 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0","Type":"ContainerDied","Data":"4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0"} Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.525885 4923 generic.go:334] "Generic (PLEG): container finished" podID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerID="70eb13e63c244ffe7b910e005545083669078f0ed021473a043c0d8bf2ab0bbb" exitCode=0 Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.525924 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d","Type":"ContainerDied","Data":"70eb13e63c244ffe7b910e005545083669078f0ed021473a043c0d8bf2ab0bbb"} Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.530620 4923 generic.go:334] "Generic (PLEG): container finished" podID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerID="6e3a53b71ecefaa546e459f718049404f28b24151fee67a788c66a173476317b" exitCode=0 Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.530694 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"216ff735-f76d-413a-bff8-8e0dfd4177c2","Type":"ContainerDied","Data":"6e3a53b71ecefaa546e459f718049404f28b24151fee67a788c66a173476317b"} Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 
04:32:49.844855 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-45mj4"] Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.846548 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.856855 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45mj4"] Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.948042 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdb9\" (UniqueName: \"kubernetes.io/projected/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-kube-api-access-4qdb9\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.948106 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-catalog-content\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:49 crc kubenswrapper[4923]: I0321 04:32:49.948155 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-utilities\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.049076 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdb9\" (UniqueName: 
\"kubernetes.io/projected/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-kube-api-access-4qdb9\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.049119 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-catalog-content\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.049150 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-utilities\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.049531 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-utilities\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.049959 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-catalog-content\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.066750 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdb9\" (UniqueName: 
\"kubernetes.io/projected/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-kube-api-access-4qdb9\") pod \"certified-operators-45mj4\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.165138 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.537548 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"216ff735-f76d-413a-bff8-8e0dfd4177c2","Type":"ContainerStarted","Data":"d2d47752695a7418eef6b77def1c5fe7f38251dbcc7064bcdfab145553e44ca2"} Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.538911 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0","Type":"ContainerStarted","Data":"8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734"} Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.540196 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d","Type":"ContainerStarted","Data":"558281794c538bc786bd95ab7ca67e12cc9cd51b4f5f3788849264ef7aabda98"} Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.572044 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/openstack-galera-2" podStartSLOduration=7.806362224 podStartE2EDuration="15.572030223s" podCreationTimestamp="2026-03-21 04:32:35 +0000 UTC" firstStartedPulling="2026-03-21 04:32:37.41190777 +0000 UTC m=+922.564918857" lastFinishedPulling="2026-03-21 04:32:45.177575769 +0000 UTC m=+930.330586856" observedRunningTime="2026-03-21 04:32:50.56704937 +0000 UTC m=+935.720060457" watchObservedRunningTime="2026-03-21 04:32:50.572030223 +0000 UTC m=+935.725041310" Mar 21 
04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.588407 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/openstack-galera-1" podStartSLOduration=7.509139814 podStartE2EDuration="15.588389745s" podCreationTimestamp="2026-03-21 04:32:35 +0000 UTC" firstStartedPulling="2026-03-21 04:32:37.118954823 +0000 UTC m=+922.271965910" lastFinishedPulling="2026-03-21 04:32:45.198204754 +0000 UTC m=+930.351215841" observedRunningTime="2026-03-21 04:32:50.586080609 +0000 UTC m=+935.739091696" watchObservedRunningTime="2026-03-21 04:32:50.588389745 +0000 UTC m=+935.741400842" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.609468 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/openstack-galera-0" podStartSLOduration=7.463480788 podStartE2EDuration="15.609451282s" podCreationTimestamp="2026-03-21 04:32:35 +0000 UTC" firstStartedPulling="2026-03-21 04:32:37.071352261 +0000 UTC m=+922.224363368" lastFinishedPulling="2026-03-21 04:32:45.217322775 +0000 UTC m=+930.370333862" observedRunningTime="2026-03-21 04:32:50.60868806 +0000 UTC m=+935.761699137" watchObservedRunningTime="2026-03-21 04:32:50.609451282 +0000 UTC m=+935.762462369" Mar 21 04:32:50 crc kubenswrapper[4923]: I0321 04:32:50.688976 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-45mj4"] Mar 21 04:32:50 crc kubenswrapper[4923]: W0321 04:32:50.692172 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ea44820_b94e_48e8_866b_5ac1b3c07c6e.slice/crio-3b67821394272e3d4b58700ebe9a5fa79b8350f2b45d56825833878354a459f4 WatchSource:0}: Error finding container 3b67821394272e3d4b58700ebe9a5fa79b8350f2b45d56825833878354a459f4: Status 404 returned error can't find the container with id 3b67821394272e3d4b58700ebe9a5fa79b8350f2b45d56825833878354a459f4 Mar 21 04:32:51 crc kubenswrapper[4923]: I0321 
04:32:51.549787 4923 generic.go:334] "Generic (PLEG): container finished" podID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerID="fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c" exitCode=0 Mar 21 04:32:51 crc kubenswrapper[4923]: I0321 04:32:51.550251 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45mj4" event={"ID":"4ea44820-b94e-48e8-866b-5ac1b3c07c6e","Type":"ContainerDied","Data":"fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c"} Mar 21 04:32:51 crc kubenswrapper[4923]: I0321 04:32:51.550389 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45mj4" event={"ID":"4ea44820-b94e-48e8-866b-5ac1b3c07c6e","Type":"ContainerStarted","Data":"3b67821394272e3d4b58700ebe9a5fa79b8350f2b45d56825833878354a459f4"} Mar 21 04:32:52 crc kubenswrapper[4923]: I0321 04:32:52.563178 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45mj4" event={"ID":"4ea44820-b94e-48e8-866b-5ac1b3c07c6e","Type":"ContainerStarted","Data":"b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d"} Mar 21 04:32:53 crc kubenswrapper[4923]: I0321 04:32:53.570522 4923 generic.go:334] "Generic (PLEG): container finished" podID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerID="b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d" exitCode=0 Mar 21 04:32:53 crc kubenswrapper[4923]: I0321 04:32:53.570609 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45mj4" event={"ID":"4ea44820-b94e-48e8-866b-5ac1b3c07c6e","Type":"ContainerDied","Data":"b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d"} Mar 21 04:32:56 crc kubenswrapper[4923]: I0321 04:32:56.594096 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45mj4" 
event={"ID":"4ea44820-b94e-48e8-866b-5ac1b3c07c6e","Type":"ContainerStarted","Data":"cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3"} Mar 21 04:32:56 crc kubenswrapper[4923]: I0321 04:32:56.618263 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-45mj4" podStartSLOduration=3.575403004 podStartE2EDuration="7.618239698s" podCreationTimestamp="2026-03-21 04:32:49 +0000 UTC" firstStartedPulling="2026-03-21 04:32:51.55171639 +0000 UTC m=+936.704727487" lastFinishedPulling="2026-03-21 04:32:55.594553094 +0000 UTC m=+940.747564181" observedRunningTime="2026-03-21 04:32:56.61553013 +0000 UTC m=+941.768541247" watchObservedRunningTime="2026-03-21 04:32:56.618239698 +0000 UTC m=+941.771250825" Mar 21 04:32:56 crc kubenswrapper[4923]: I0321 04:32:56.844706 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:56 crc kubenswrapper[4923]: I0321 04:32:56.844747 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:32:56 crc kubenswrapper[4923]: I0321 04:32:56.879935 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:56 crc kubenswrapper[4923]: I0321 04:32:56.880010 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:32:57 crc kubenswrapper[4923]: I0321 04:32:57.170493 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:57 crc kubenswrapper[4923]: I0321 04:32:57.170532 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:59 crc kubenswrapper[4923]: I0321 04:32:59.481485 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:32:59 crc kubenswrapper[4923]: I0321 04:32:59.577945 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:33:00 crc kubenswrapper[4923]: I0321 04:33:00.165918 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:33:00 crc kubenswrapper[4923]: I0321 04:33:00.165979 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:33:00 crc kubenswrapper[4923]: I0321 04:33:00.207396 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:33:00 crc kubenswrapper[4923]: I0321 04:33:00.677990 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:33:00 crc kubenswrapper[4923]: I0321 04:33:00.724254 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45mj4"] Mar 21 04:33:01 crc kubenswrapper[4923]: I0321 04:33:01.590850 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:33:02 crc kubenswrapper[4923]: I0321 04:33:02.670944 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-45mj4" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="registry-server" containerID="cri-o://cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3" gracePeriod=2 Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.047022 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.134066 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qdb9\" (UniqueName: \"kubernetes.io/projected/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-kube-api-access-4qdb9\") pod \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.134187 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-utilities\") pod \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.134224 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-catalog-content\") pod \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\" (UID: \"4ea44820-b94e-48e8-866b-5ac1b3c07c6e\") " Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.135012 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-utilities" (OuterVolumeSpecName: "utilities") pod "4ea44820-b94e-48e8-866b-5ac1b3c07c6e" (UID: "4ea44820-b94e-48e8-866b-5ac1b3c07c6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.139315 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-kube-api-access-4qdb9" (OuterVolumeSpecName: "kube-api-access-4qdb9") pod "4ea44820-b94e-48e8-866b-5ac1b3c07c6e" (UID: "4ea44820-b94e-48e8-866b-5ac1b3c07c6e"). InnerVolumeSpecName "kube-api-access-4qdb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.192820 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ea44820-b94e-48e8-866b-5ac1b3c07c6e" (UID: "4ea44820-b94e-48e8-866b-5ac1b3c07c6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.235932 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.236005 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.236162 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.236188 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.236199 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qdb9\" (UniqueName: 
\"kubernetes.io/projected/4ea44820-b94e-48e8-866b-5ac1b3c07c6e-kube-api-access-4qdb9\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.685235 4923 generic.go:334] "Generic (PLEG): container finished" podID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerID="cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3" exitCode=0 Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.685288 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45mj4" event={"ID":"4ea44820-b94e-48e8-866b-5ac1b3c07c6e","Type":"ContainerDied","Data":"cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3"} Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.685398 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-45mj4" event={"ID":"4ea44820-b94e-48e8-866b-5ac1b3c07c6e","Type":"ContainerDied","Data":"3b67821394272e3d4b58700ebe9a5fa79b8350f2b45d56825833878354a459f4"} Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.685424 4923 scope.go:117] "RemoveContainer" containerID="cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.685487 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-45mj4" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.717761 4923 scope.go:117] "RemoveContainer" containerID="b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.751626 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-45mj4"] Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.762140 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-45mj4"] Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.766651 4923 scope.go:117] "RemoveContainer" containerID="fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.800548 4923 scope.go:117] "RemoveContainer" containerID="cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3" Mar 21 04:33:03 crc kubenswrapper[4923]: E0321 04:33:03.801028 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3\": container with ID starting with cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3 not found: ID does not exist" containerID="cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.801067 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3"} err="failed to get container status \"cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3\": rpc error: code = NotFound desc = could not find container \"cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3\": container with ID starting with cdb72b0881e26a0123c0525a3cb3a705b2bfb372ffb77581cd2a25c7851a1de3 not 
found: ID does not exist" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.801091 4923 scope.go:117] "RemoveContainer" containerID="b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d" Mar 21 04:33:03 crc kubenswrapper[4923]: E0321 04:33:03.801445 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d\": container with ID starting with b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d not found: ID does not exist" containerID="b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.801539 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d"} err="failed to get container status \"b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d\": rpc error: code = NotFound desc = could not find container \"b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d\": container with ID starting with b9afa6c04548812fdfb32faeb7bf585d008dafc49ea7a5ae429b012471ac034d not found: ID does not exist" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.801592 4923 scope.go:117] "RemoveContainer" containerID="fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c" Mar 21 04:33:03 crc kubenswrapper[4923]: E0321 04:33:03.801947 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c\": container with ID starting with fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c not found: ID does not exist" containerID="fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c" Mar 21 04:33:03 crc kubenswrapper[4923]: I0321 04:33:03.801969 4923 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c"} err="failed to get container status \"fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c\": rpc error: code = NotFound desc = could not find container \"fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c\": container with ID starting with fc829ce099658b1658c8caa8c32987ff4812fa861bec954578c8dfbca99aa72c not found: ID does not exist" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.368436 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" path="/var/lib/kubelet/pods/4ea44820-b94e-48e8-866b-5ac1b3c07c6e/volumes" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.568818 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/memcached-0"] Mar 21 04:33:04 crc kubenswrapper[4923]: E0321 04:33:04.569059 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="extract-content" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.569073 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="extract-content" Mar 21 04:33:04 crc kubenswrapper[4923]: E0321 04:33:04.569084 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="extract-utilities" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.569091 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="extract-utilities" Mar 21 04:33:04 crc kubenswrapper[4923]: E0321 04:33:04.569099 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="registry-server" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.569106 4923 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="registry-server" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.569207 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea44820-b94e-48e8-866b-5ac1b3c07c6e" containerName="registry-server" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.569621 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.571943 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"memcached-memcached-dockercfg-78lkc" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.572206 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"memcached-config-data" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.582976 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.657160 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-config-data\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.657247 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vd8\" (UniqueName: \"kubernetes.io/projected/09f2119f-aa10-497e-bc3b-547681df5eb4-kube-api-access-c6vd8\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.657290 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-kolla-config\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.758610 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-config-data\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.758675 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vd8\" (UniqueName: \"kubernetes.io/projected/09f2119f-aa10-497e-bc3b-547681df5eb4-kube-api-access-c6vd8\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.758705 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-kolla-config\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.759461 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-config-data\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.759910 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-kolla-config\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 
04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.797451 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vd8\" (UniqueName: \"kubernetes.io/projected/09f2119f-aa10-497e-bc3b-547681df5eb4-kube-api-access-c6vd8\") pod \"memcached-0\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:04 crc kubenswrapper[4923]: I0321 04:33:04.892655 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.304121 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.600066 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8ftdm"] Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.601113 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.603778 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"openstack-mariadb-root-db-secret" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.605710 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8ftdm"] Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.670962 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh8rt\" (UniqueName: \"kubernetes.io/projected/7331201d-612f-4047-b9f6-15634dddeebc-kube-api-access-wh8rt\") pod \"root-account-create-update-8ftdm\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.671076 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7331201d-612f-4047-b9f6-15634dddeebc-operator-scripts\") pod \"root-account-create-update-8ftdm\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.699778 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"09f2119f-aa10-497e-bc3b-547681df5eb4","Type":"ContainerStarted","Data":"6d794cf48ce6618eb5d093ad5d0e1a20cf3e36f228da2d93ae004a98ae072dab"} Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.772281 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7331201d-612f-4047-b9f6-15634dddeebc-operator-scripts\") pod \"root-account-create-update-8ftdm\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.772376 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh8rt\" (UniqueName: \"kubernetes.io/projected/7331201d-612f-4047-b9f6-15634dddeebc-kube-api-access-wh8rt\") pod \"root-account-create-update-8ftdm\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.773137 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7331201d-612f-4047-b9f6-15634dddeebc-operator-scripts\") pod \"root-account-create-update-8ftdm\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.809049 4923 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wh8rt\" (UniqueName: \"kubernetes.io/projected/7331201d-612f-4047-b9f6-15634dddeebc-kube-api-access-wh8rt\") pod \"root-account-create-update-8ftdm\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.928910 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:05 crc kubenswrapper[4923]: I0321 04:33:05.989862 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:33:06 crc kubenswrapper[4923]: I0321 04:33:06.076520 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:33:06 crc kubenswrapper[4923]: I0321 04:33:06.296680 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8ftdm"] Mar 21 04:33:06 crc kubenswrapper[4923]: I0321 04:33:06.707343 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" event={"ID":"7331201d-612f-4047-b9f6-15634dddeebc","Type":"ContainerStarted","Data":"8a23cca9f8e8fb84e508324780dc96a0d68bb7138af81924407226d9ed8282d0"} Mar 21 04:33:06 crc kubenswrapper[4923]: I0321 04:33:06.707412 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" event={"ID":"7331201d-612f-4047-b9f6-15634dddeebc","Type":"ContainerStarted","Data":"d8528fbd79643bc2180c5187e2b3d60d787ddfcffd5e13182b22f28b2f36279d"} Mar 21 04:33:06 crc kubenswrapper[4923]: I0321 04:33:06.731290 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" podStartSLOduration=1.7312675880000001 podStartE2EDuration="1.731267588s" podCreationTimestamp="2026-03-21 
04:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:33:06.727675074 +0000 UTC m=+951.880686161" watchObservedRunningTime="2026-03-21 04:33:06.731267588 +0000 UTC m=+951.884278685" Mar 21 04:33:07 crc kubenswrapper[4923]: I0321 04:33:07.247443 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/openstack-galera-2" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerName="galera" probeResult="failure" output=< Mar 21 04:33:07 crc kubenswrapper[4923]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Mar 21 04:33:07 crc kubenswrapper[4923]: > Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.442687 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4l2x"] Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.443681 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.445964 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-gdz6v" Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.454855 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4l2x"] Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.528926 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvszt\" (UniqueName: \"kubernetes.io/projected/edc9a056-337a-48fd-8a1c-9b12d2e12aa0-kube-api-access-dvszt\") pod \"rabbitmq-cluster-operator-index-z4l2x\" (UID: \"edc9a056-337a-48fd-8a1c-9b12d2e12aa0\") " pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.630442 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvszt\" (UniqueName: \"kubernetes.io/projected/edc9a056-337a-48fd-8a1c-9b12d2e12aa0-kube-api-access-dvszt\") pod \"rabbitmq-cluster-operator-index-z4l2x\" (UID: \"edc9a056-337a-48fd-8a1c-9b12d2e12aa0\") " pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.649108 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvszt\" (UniqueName: \"kubernetes.io/projected/edc9a056-337a-48fd-8a1c-9b12d2e12aa0-kube-api-access-dvszt\") pod \"rabbitmq-cluster-operator-index-z4l2x\" (UID: \"edc9a056-337a-48fd-8a1c-9b12d2e12aa0\") " pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.721524 4923 generic.go:334] "Generic (PLEG): container finished" podID="7331201d-612f-4047-b9f6-15634dddeebc" containerID="8a23cca9f8e8fb84e508324780dc96a0d68bb7138af81924407226d9ed8282d0" exitCode=0 Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.721611 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" event={"ID":"7331201d-612f-4047-b9f6-15634dddeebc","Type":"ContainerDied","Data":"8a23cca9f8e8fb84e508324780dc96a0d68bb7138af81924407226d9ed8282d0"} Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.723274 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"09f2119f-aa10-497e-bc3b-547681df5eb4","Type":"ContainerStarted","Data":"b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d"} Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.723433 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.755805 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="horizon-kuttl-tests/memcached-0" podStartSLOduration=2.491719881 podStartE2EDuration="4.755790339s" podCreationTimestamp="2026-03-21 04:33:04 +0000 UTC" firstStartedPulling="2026-03-21 04:33:05.323212861 +0000 UTC m=+950.476223988" lastFinishedPulling="2026-03-21 04:33:07.587283359 +0000 UTC m=+952.740294446" observedRunningTime="2026-03-21 04:33:08.755631725 +0000 UTC m=+953.908642812" watchObservedRunningTime="2026-03-21 04:33:08.755790339 +0000 UTC m=+953.908801426" Mar 21 04:33:08 crc kubenswrapper[4923]: I0321 04:33:08.799779 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" Mar 21 04:33:09 crc kubenswrapper[4923]: I0321 04:33:09.275214 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4l2x"] Mar 21 04:33:09 crc kubenswrapper[4923]: I0321 04:33:09.728788 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" event={"ID":"edc9a056-337a-48fd-8a1c-9b12d2e12aa0","Type":"ContainerStarted","Data":"6b739f7b6917c9147c2681477b94c6135e0841a78718e63d1b814939a9affd44"} Mar 21 04:33:09 crc kubenswrapper[4923]: I0321 04:33:09.993014 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.048149 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh8rt\" (UniqueName: \"kubernetes.io/projected/7331201d-612f-4047-b9f6-15634dddeebc-kube-api-access-wh8rt\") pod \"7331201d-612f-4047-b9f6-15634dddeebc\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.048229 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7331201d-612f-4047-b9f6-15634dddeebc-operator-scripts\") pod \"7331201d-612f-4047-b9f6-15634dddeebc\" (UID: \"7331201d-612f-4047-b9f6-15634dddeebc\") " Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.048719 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7331201d-612f-4047-b9f6-15634dddeebc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7331201d-612f-4047-b9f6-15634dddeebc" (UID: "7331201d-612f-4047-b9f6-15634dddeebc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.054027 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7331201d-612f-4047-b9f6-15634dddeebc-kube-api-access-wh8rt" (OuterVolumeSpecName: "kube-api-access-wh8rt") pod "7331201d-612f-4047-b9f6-15634dddeebc" (UID: "7331201d-612f-4047-b9f6-15634dddeebc"). InnerVolumeSpecName "kube-api-access-wh8rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.150358 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh8rt\" (UniqueName: \"kubernetes.io/projected/7331201d-612f-4047-b9f6-15634dddeebc-kube-api-access-wh8rt\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.150394 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7331201d-612f-4047-b9f6-15634dddeebc-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.756573 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" event={"ID":"7331201d-612f-4047-b9f6-15634dddeebc","Type":"ContainerDied","Data":"d8528fbd79643bc2180c5187e2b3d60d787ddfcffd5e13182b22f28b2f36279d"} Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.756615 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8528fbd79643bc2180c5187e2b3d60d787ddfcffd5e13182b22f28b2f36279d" Mar 21 04:33:10 crc kubenswrapper[4923]: I0321 04:33:10.756666 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-8ftdm" Mar 21 04:33:12 crc kubenswrapper[4923]: I0321 04:33:12.682148 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:33:12 crc kubenswrapper[4923]: I0321 04:33:12.744884 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:33:13 crc kubenswrapper[4923]: I0321 04:33:13.436561 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4l2x"] Mar 21 04:33:13 crc kubenswrapper[4923]: I0321 04:33:13.781798 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" event={"ID":"edc9a056-337a-48fd-8a1c-9b12d2e12aa0","Type":"ContainerStarted","Data":"22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975"} Mar 21 04:33:13 crc kubenswrapper[4923]: I0321 04:33:13.781917 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" podUID="edc9a056-337a-48fd-8a1c-9b12d2e12aa0" containerName="registry-server" containerID="cri-o://22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975" gracePeriod=2 Mar 21 04:33:13 crc kubenswrapper[4923]: I0321 04:33:13.798928 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" podStartSLOduration=1.4677686269999999 podStartE2EDuration="5.798905113s" podCreationTimestamp="2026-03-21 04:33:08 +0000 UTC" firstStartedPulling="2026-03-21 04:33:09.288748176 +0000 UTC m=+954.441759303" lastFinishedPulling="2026-03-21 04:33:13.619884712 +0000 UTC m=+958.772895789" observedRunningTime="2026-03-21 04:33:13.795514345 +0000 UTC m=+958.948525432" watchObservedRunningTime="2026-03-21 04:33:13.798905113 +0000 UTC m=+958.951916200" Mar 
21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.051410 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5wtgm"] Mar 21 04:33:14 crc kubenswrapper[4923]: E0321 04:33:14.051789 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7331201d-612f-4047-b9f6-15634dddeebc" containerName="mariadb-account-create-update" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.051800 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7331201d-612f-4047-b9f6-15634dddeebc" containerName="mariadb-account-create-update" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.051925 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="7331201d-612f-4047-b9f6-15634dddeebc" containerName="mariadb-account-create-update" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.052617 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.064613 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5wtgm"] Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.111114 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5zqr\" (UniqueName: \"kubernetes.io/projected/9faba93f-83d8-45ad-bea2-44e730e7f3a4-kube-api-access-j5zqr\") pod \"rabbitmq-cluster-operator-index-5wtgm\" (UID: \"9faba93f-83d8-45ad-bea2-44e730e7f3a4\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.212177 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5zqr\" (UniqueName: \"kubernetes.io/projected/9faba93f-83d8-45ad-bea2-44e730e7f3a4-kube-api-access-j5zqr\") pod \"rabbitmq-cluster-operator-index-5wtgm\" (UID: 
\"9faba93f-83d8-45ad-bea2-44e730e7f3a4\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.223764 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.232929 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5zqr\" (UniqueName: \"kubernetes.io/projected/9faba93f-83d8-45ad-bea2-44e730e7f3a4-kube-api-access-j5zqr\") pod \"rabbitmq-cluster-operator-index-5wtgm\" (UID: \"9faba93f-83d8-45ad-bea2-44e730e7f3a4\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.313294 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvszt\" (UniqueName: \"kubernetes.io/projected/edc9a056-337a-48fd-8a1c-9b12d2e12aa0-kube-api-access-dvszt\") pod \"edc9a056-337a-48fd-8a1c-9b12d2e12aa0\" (UID: \"edc9a056-337a-48fd-8a1c-9b12d2e12aa0\") " Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.318970 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc9a056-337a-48fd-8a1c-9b12d2e12aa0-kube-api-access-dvszt" (OuterVolumeSpecName: "kube-api-access-dvszt") pod "edc9a056-337a-48fd-8a1c-9b12d2e12aa0" (UID: "edc9a056-337a-48fd-8a1c-9b12d2e12aa0"). InnerVolumeSpecName "kube-api-access-dvszt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.382412 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.415633 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvszt\" (UniqueName: \"kubernetes.io/projected/edc9a056-337a-48fd-8a1c-9b12d2e12aa0-kube-api-access-dvszt\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.704268 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5wtgm"] Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.789278 4923 generic.go:334] "Generic (PLEG): container finished" podID="edc9a056-337a-48fd-8a1c-9b12d2e12aa0" containerID="22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975" exitCode=0 Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.789362 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" event={"ID":"edc9a056-337a-48fd-8a1c-9b12d2e12aa0","Type":"ContainerDied","Data":"22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975"} Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.789390 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" event={"ID":"edc9a056-337a-48fd-8a1c-9b12d2e12aa0","Type":"ContainerDied","Data":"6b739f7b6917c9147c2681477b94c6135e0841a78718e63d1b814939a9affd44"} Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.789409 4923 scope.go:117] "RemoveContainer" containerID="22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.789428 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-z4l2x" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.792090 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" event={"ID":"9faba93f-83d8-45ad-bea2-44e730e7f3a4","Type":"ContainerStarted","Data":"d7d0ce6c7f703bdbf49a583bb377876c6fb29cd459b4b30473f37eed59756d0b"} Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.811663 4923 scope.go:117] "RemoveContainer" containerID="22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975" Mar 21 04:33:14 crc kubenswrapper[4923]: E0321 04:33:14.812480 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975\": container with ID starting with 22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975 not found: ID does not exist" containerID="22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.812610 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975"} err="failed to get container status \"22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975\": rpc error: code = NotFound desc = could not find container \"22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975\": container with ID starting with 22231da67db79009871f09b0bab8acd30bf7ee0b3914cc0daba68872956d0975 not found: ID does not exist" Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.814265 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-z4l2x"] Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.819789 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-index-z4l2x"] Mar 21 04:33:14 crc kubenswrapper[4923]: I0321 04:33:14.894134 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/memcached-0" Mar 21 04:33:15 crc kubenswrapper[4923]: I0321 04:33:15.802989 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" event={"ID":"9faba93f-83d8-45ad-bea2-44e730e7f3a4","Type":"ContainerStarted","Data":"9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90"} Mar 21 04:33:15 crc kubenswrapper[4923]: I0321 04:33:15.827943 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" podStartSLOduration=1.404351731 podStartE2EDuration="1.827919694s" podCreationTimestamp="2026-03-21 04:33:14 +0000 UTC" firstStartedPulling="2026-03-21 04:33:14.705541493 +0000 UTC m=+959.858552580" lastFinishedPulling="2026-03-21 04:33:15.129109446 +0000 UTC m=+960.282120543" observedRunningTime="2026-03-21 04:33:15.821500099 +0000 UTC m=+960.974511236" watchObservedRunningTime="2026-03-21 04:33:15.827919694 +0000 UTC m=+960.980930791" Mar 21 04:33:16 crc kubenswrapper[4923]: I0321 04:33:16.371641 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc9a056-337a-48fd-8a1c-9b12d2e12aa0" path="/var/lib/kubelet/pods/edc9a056-337a-48fd-8a1c-9b12d2e12aa0/volumes" Mar 21 04:33:24 crc kubenswrapper[4923]: I0321 04:33:24.382721 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:24 crc kubenswrapper[4923]: I0321 04:33:24.383427 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:24 crc kubenswrapper[4923]: I0321 04:33:24.418287 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:24 crc kubenswrapper[4923]: I0321 04:33:24.898940 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.279989 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96"] Mar 21 04:33:28 crc kubenswrapper[4923]: E0321 04:33:28.281472 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc9a056-337a-48fd-8a1c-9b12d2e12aa0" containerName="registry-server" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.281572 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc9a056-337a-48fd-8a1c-9b12d2e12aa0" containerName="registry-server" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.281746 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc9a056-337a-48fd-8a1c-9b12d2e12aa0" containerName="registry-server" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.282601 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.286188 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vcb7s" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.290999 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96"] Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.429178 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4ft\" (UniqueName: \"kubernetes.io/projected/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-kube-api-access-6v4ft\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.429439 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.429711 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 
04:33:28.530576 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4ft\" (UniqueName: \"kubernetes.io/projected/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-kube-api-access-6v4ft\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.530639 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.530732 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.531113 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.531217 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.556374 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4ft\" (UniqueName: \"kubernetes.io/projected/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-kube-api-access-6v4ft\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:28 crc kubenswrapper[4923]: I0321 04:33:28.642531 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:29 crc kubenswrapper[4923]: W0321 04:33:29.154512 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod028ed822_e3eb_47f2_9f87_ba89b0f4e3a7.slice/crio-40a428e64cc787b9c56b61dfa7aa73d5ad6487edbac2030a0acfa1dc41357dbe WatchSource:0}: Error finding container 40a428e64cc787b9c56b61dfa7aa73d5ad6487edbac2030a0acfa1dc41357dbe: Status 404 returned error can't find the container with id 40a428e64cc787b9c56b61dfa7aa73d5ad6487edbac2030a0acfa1dc41357dbe Mar 21 04:33:29 crc kubenswrapper[4923]: I0321 04:33:29.156363 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96"] Mar 21 04:33:29 crc kubenswrapper[4923]: I0321 04:33:29.904534 4923 generic.go:334] "Generic (PLEG): container finished" podID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerID="913c86b642b50c0db8f2db51fcdbaabe836b66b378f8b08c12323cdbde872502" exitCode=0 Mar 21 
04:33:29 crc kubenswrapper[4923]: I0321 04:33:29.904595 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" event={"ID":"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7","Type":"ContainerDied","Data":"913c86b642b50c0db8f2db51fcdbaabe836b66b378f8b08c12323cdbde872502"} Mar 21 04:33:29 crc kubenswrapper[4923]: I0321 04:33:29.904633 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" event={"ID":"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7","Type":"ContainerStarted","Data":"40a428e64cc787b9c56b61dfa7aa73d5ad6487edbac2030a0acfa1dc41357dbe"} Mar 21 04:33:33 crc kubenswrapper[4923]: I0321 04:33:33.236288 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:33:33 crc kubenswrapper[4923]: I0321 04:33:33.236933 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:33:35 crc kubenswrapper[4923]: I0321 04:33:35.951882 4923 generic.go:334] "Generic (PLEG): container finished" podID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerID="04c56c12c42893bbbd5f42cf72b1d50e401e494d67c3ae3c8fa72418cfab8d0d" exitCode=0 Mar 21 04:33:35 crc kubenswrapper[4923]: I0321 04:33:35.951955 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" 
event={"ID":"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7","Type":"ContainerDied","Data":"04c56c12c42893bbbd5f42cf72b1d50e401e494d67c3ae3c8fa72418cfab8d0d"} Mar 21 04:33:36 crc kubenswrapper[4923]: I0321 04:33:36.964262 4923 generic.go:334] "Generic (PLEG): container finished" podID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerID="3ab92e81bb94d39375d057ca2450621f5934bc015470ebbc8fb8c77165bdcec3" exitCode=0 Mar 21 04:33:36 crc kubenswrapper[4923]: I0321 04:33:36.964394 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" event={"ID":"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7","Type":"ContainerDied","Data":"3ab92e81bb94d39375d057ca2450621f5934bc015470ebbc8fb8c77165bdcec3"} Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.375058 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.494393 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v4ft\" (UniqueName: \"kubernetes.io/projected/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-kube-api-access-6v4ft\") pod \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.494466 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-bundle\") pod \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\" (UID: \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.494508 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-util\") pod \"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\" (UID: 
\"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7\") " Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.501726 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-bundle" (OuterVolumeSpecName: "bundle") pod "028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" (UID: "028ed822-e3eb-47f2-9f87-ba89b0f4e3a7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.507502 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-kube-api-access-6v4ft" (OuterVolumeSpecName: "kube-api-access-6v4ft") pod "028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" (UID: "028ed822-e3eb-47f2-9f87-ba89b0f4e3a7"). InnerVolumeSpecName "kube-api-access-6v4ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.526626 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-util" (OuterVolumeSpecName: "util") pod "028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" (UID: "028ed822-e3eb-47f2-9f87-ba89b0f4e3a7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.595694 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v4ft\" (UniqueName: \"kubernetes.io/projected/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-kube-api-access-6v4ft\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.595748 4923 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.595761 4923 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7-util\") on node \"crc\" DevicePath \"\"" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.984941 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" event={"ID":"028ed822-e3eb-47f2-9f87-ba89b0f4e3a7","Type":"ContainerDied","Data":"40a428e64cc787b9c56b61dfa7aa73d5ad6487edbac2030a0acfa1dc41357dbe"} Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.985229 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a428e64cc787b9c56b61dfa7aa73d5ad6487edbac2030a0acfa1dc41357dbe" Mar 21 04:33:38 crc kubenswrapper[4923]: I0321 04:33:38.985065 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.217214 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5"] Mar 21 04:33:54 crc kubenswrapper[4923]: E0321 04:33:54.218156 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerName="util" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.218175 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerName="util" Mar 21 04:33:54 crc kubenswrapper[4923]: E0321 04:33:54.218189 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerName="extract" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.218199 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerName="extract" Mar 21 04:33:54 crc kubenswrapper[4923]: E0321 04:33:54.218230 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerName="pull" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.218244 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerName="pull" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.218659 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" containerName="extract" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.219238 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.222755 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-pvkwn" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.245028 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5"] Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.328648 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs82h\" (UniqueName: \"kubernetes.io/projected/c56c4706-b4d1-4fe7-bf97-7328684b55e0-kube-api-access-hs82h\") pod \"rabbitmq-cluster-operator-779fc9694b-dz4d5\" (UID: \"c56c4706-b4d1-4fe7-bf97-7328684b55e0\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.430410 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs82h\" (UniqueName: \"kubernetes.io/projected/c56c4706-b4d1-4fe7-bf97-7328684b55e0-kube-api-access-hs82h\") pod \"rabbitmq-cluster-operator-779fc9694b-dz4d5\" (UID: \"c56c4706-b4d1-4fe7-bf97-7328684b55e0\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.449367 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs82h\" (UniqueName: \"kubernetes.io/projected/c56c4706-b4d1-4fe7-bf97-7328684b55e0-kube-api-access-hs82h\") pod \"rabbitmq-cluster-operator-779fc9694b-dz4d5\" (UID: \"c56c4706-b4d1-4fe7-bf97-7328684b55e0\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" Mar 21 04:33:54 crc kubenswrapper[4923]: I0321 04:33:54.544003 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" Mar 21 04:33:55 crc kubenswrapper[4923]: I0321 04:33:55.032542 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5"] Mar 21 04:33:55 crc kubenswrapper[4923]: I0321 04:33:55.103878 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" event={"ID":"c56c4706-b4d1-4fe7-bf97-7328684b55e0","Type":"ContainerStarted","Data":"be8ee70ba24ce2b6352e9ec1924d2a0d0c0665e282c69039b175c065697891ed"} Mar 21 04:33:59 crc kubenswrapper[4923]: I0321 04:33:59.131114 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" event={"ID":"c56c4706-b4d1-4fe7-bf97-7328684b55e0","Type":"ContainerStarted","Data":"b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb"} Mar 21 04:33:59 crc kubenswrapper[4923]: I0321 04:33:59.145727 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" podStartSLOduration=1.8524787360000001 podStartE2EDuration="5.145712466s" podCreationTimestamp="2026-03-21 04:33:54 +0000 UTC" firstStartedPulling="2026-03-21 04:33:55.027812699 +0000 UTC m=+1000.180823786" lastFinishedPulling="2026-03-21 04:33:58.321046429 +0000 UTC m=+1003.474057516" observedRunningTime="2026-03-21 04:33:59.141956838 +0000 UTC m=+1004.294967935" watchObservedRunningTime="2026-03-21 04:33:59.145712466 +0000 UTC m=+1004.298723553" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.235875 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567794-8gk59"] Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.239420 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567794-8gk59" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.241629 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567794-8gk59"] Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.241933 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.242354 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.244389 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.323189 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhwv\" (UniqueName: \"kubernetes.io/projected/1023ab82-a41a-4fa0-a0fb-f19cf7ed3716-kube-api-access-9jhwv\") pod \"auto-csr-approver-29567794-8gk59\" (UID: \"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716\") " pod="openshift-infra/auto-csr-approver-29567794-8gk59" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.425688 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jhwv\" (UniqueName: \"kubernetes.io/projected/1023ab82-a41a-4fa0-a0fb-f19cf7ed3716-kube-api-access-9jhwv\") pod \"auto-csr-approver-29567794-8gk59\" (UID: \"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716\") " pod="openshift-infra/auto-csr-approver-29567794-8gk59" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.468046 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhwv\" (UniqueName: \"kubernetes.io/projected/1023ab82-a41a-4fa0-a0fb-f19cf7ed3716-kube-api-access-9jhwv\") pod \"auto-csr-approver-29567794-8gk59\" (UID: \"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716\") " 
pod="openshift-infra/auto-csr-approver-29567794-8gk59" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.567167 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567794-8gk59" Mar 21 04:34:00 crc kubenswrapper[4923]: I0321 04:34:00.892505 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567794-8gk59"] Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.147950 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567794-8gk59" event={"ID":"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716","Type":"ContainerStarted","Data":"d22252e8cec90715c670b2acd83383a8c2b801cabed97e64a0baa2d9bf3fe3e1"} Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.815046 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.816255 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.818575 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"rabbitmq-server-conf" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.818766 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"rabbitmq-erlang-cookie" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.820869 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"rabbitmq-default-user" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.821037 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"rabbitmq-plugins-conf" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.821317 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"rabbitmq-server-dockercfg-gdcwl" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.830233 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.948836 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9701db56-65b1-4cee-8942-69fc9cc4e7b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.948943 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.948974 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9701db56-65b1-4cee-8942-69fc9cc4e7b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.949003 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9701db56-65b1-4cee-8942-69fc9cc4e7b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.949051 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c882a054-a638-41a8-95e3-fce60ed92425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.949084 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cz64\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-kube-api-access-7cz64\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc kubenswrapper[4923]: I0321 04:34:01.949113 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:01 crc 
kubenswrapper[4923]: I0321 04:34:01.949149 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.049977 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9701db56-65b1-4cee-8942-69fc9cc4e7b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050018 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9701db56-65b1-4cee-8942-69fc9cc4e7b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050054 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c882a054-a638-41a8-95e3-fce60ed92425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050089 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cz64\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-kube-api-access-7cz64\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 
04:34:02.050111 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050129 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050155 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9701db56-65b1-4cee-8942-69fc9cc4e7b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050204 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050657 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.050896 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.051799 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9701db56-65b1-4cee-8942-69fc9cc4e7b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.055267 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.067857 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9701db56-65b1-4cee-8942-69fc9cc4e7b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.069976 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cz64\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-kube-api-access-7cz64\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.070225 4923 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.070261 4923 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c882a054-a638-41a8-95e3-fce60ed92425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a9deb5dda91634bb8eb60c452f0924142b10e4ebbc56150124ce85f1e518a9e6/globalmount\"" pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.076436 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9701db56-65b1-4cee-8942-69fc9cc4e7b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.109050 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c882a054-a638-41a8-95e3-fce60ed92425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425\") pod \"rabbitmq-server-0\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.166686 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:02 crc kubenswrapper[4923]: I0321 04:34:02.391178 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Mar 21 04:34:02 crc kubenswrapper[4923]: W0321 04:34:02.398594 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9701db56_65b1_4cee_8942_69fc9cc4e7b8.slice/crio-a2734637b87ded42751ed71f031d0aa820539df79dc396c19fe2866b3e4d60ba WatchSource:0}: Error finding container a2734637b87ded42751ed71f031d0aa820539df79dc396c19fe2866b3e4d60ba: Status 404 returned error can't find the container with id a2734637b87ded42751ed71f031d0aa820539df79dc396c19fe2866b3e4d60ba Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.163138 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"9701db56-65b1-4cee-8942-69fc9cc4e7b8","Type":"ContainerStarted","Data":"a2734637b87ded42751ed71f031d0aa820539df79dc396c19fe2866b3e4d60ba"} Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.165737 4923 generic.go:334] "Generic (PLEG): container finished" podID="1023ab82-a41a-4fa0-a0fb-f19cf7ed3716" containerID="54f6207f84f560223dfe6d12a885f3ad3b474c0993c4dd3808948e8450d3d518" exitCode=0 Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.165804 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567794-8gk59" event={"ID":"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716","Type":"ContainerDied","Data":"54f6207f84f560223dfe6d12a885f3ad3b474c0993c4dd3808948e8450d3d518"} Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.235715 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.235834 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.235901 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.236952 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"861e5e7c19712fc1c95009bcdddeab790be4423ba276affe7ecbcb3c0afdf835"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.237099 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://861e5e7c19712fc1c95009bcdddeab790be4423ba276affe7ecbcb3c0afdf835" gracePeriod=600 Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.447688 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-jqj57"] Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.448871 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.454682 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-bbfcj" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.466158 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-jqj57"] Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.573901 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzg6r\" (UniqueName: \"kubernetes.io/projected/862bd87a-5130-41e2-a883-ac803db8df3b-kube-api-access-wzg6r\") pod \"keystone-operator-index-jqj57\" (UID: \"862bd87a-5130-41e2-a883-ac803db8df3b\") " pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.675826 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzg6r\" (UniqueName: \"kubernetes.io/projected/862bd87a-5130-41e2-a883-ac803db8df3b-kube-api-access-wzg6r\") pod \"keystone-operator-index-jqj57\" (UID: \"862bd87a-5130-41e2-a883-ac803db8df3b\") " pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.707314 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzg6r\" (UniqueName: \"kubernetes.io/projected/862bd87a-5130-41e2-a883-ac803db8df3b-kube-api-access-wzg6r\") pod \"keystone-operator-index-jqj57\" (UID: \"862bd87a-5130-41e2-a883-ac803db8df3b\") " pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:03 crc kubenswrapper[4923]: I0321 04:34:03.800996 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.183295 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="861e5e7c19712fc1c95009bcdddeab790be4423ba276affe7ecbcb3c0afdf835" exitCode=0 Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.183418 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"861e5e7c19712fc1c95009bcdddeab790be4423ba276affe7ecbcb3c0afdf835"} Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.183801 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"ed57872de3a4ec46ca622220eae1ec6afcee8d851a5ca3d72e36b3f3c20665be"} Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.183830 4923 scope.go:117] "RemoveContainer" containerID="9ca9400ca4c664dfef2280af9aecd52b539e59a9f840922ab22c0e27838ee22c" Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.243833 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-jqj57"] Mar 21 04:34:04 crc kubenswrapper[4923]: W0321 04:34:04.253751 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod862bd87a_5130_41e2_a883_ac803db8df3b.slice/crio-aa47fdadd385964b8feed7e563cc9354c833e2543397ea60151d38f8d6999e0a WatchSource:0}: Error finding container aa47fdadd385964b8feed7e563cc9354c833e2543397ea60151d38f8d6999e0a: Status 404 returned error can't find the container with id aa47fdadd385964b8feed7e563cc9354c833e2543397ea60151d38f8d6999e0a Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.438022 4923 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567794-8gk59" Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.587192 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jhwv\" (UniqueName: \"kubernetes.io/projected/1023ab82-a41a-4fa0-a0fb-f19cf7ed3716-kube-api-access-9jhwv\") pod \"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716\" (UID: \"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716\") " Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.592940 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1023ab82-a41a-4fa0-a0fb-f19cf7ed3716-kube-api-access-9jhwv" (OuterVolumeSpecName: "kube-api-access-9jhwv") pod "1023ab82-a41a-4fa0-a0fb-f19cf7ed3716" (UID: "1023ab82-a41a-4fa0-a0fb-f19cf7ed3716"). InnerVolumeSpecName "kube-api-access-9jhwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:34:04 crc kubenswrapper[4923]: I0321 04:34:04.688624 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jhwv\" (UniqueName: \"kubernetes.io/projected/1023ab82-a41a-4fa0-a0fb-f19cf7ed3716-kube-api-access-9jhwv\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:05 crc kubenswrapper[4923]: I0321 04:34:05.193352 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567794-8gk59" Mar 21 04:34:05 crc kubenswrapper[4923]: I0321 04:34:05.193425 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567794-8gk59" event={"ID":"1023ab82-a41a-4fa0-a0fb-f19cf7ed3716","Type":"ContainerDied","Data":"d22252e8cec90715c670b2acd83383a8c2b801cabed97e64a0baa2d9bf3fe3e1"} Mar 21 04:34:05 crc kubenswrapper[4923]: I0321 04:34:05.193467 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22252e8cec90715c670b2acd83383a8c2b801cabed97e64a0baa2d9bf3fe3e1" Mar 21 04:34:05 crc kubenswrapper[4923]: I0321 04:34:05.199034 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-jqj57" event={"ID":"862bd87a-5130-41e2-a883-ac803db8df3b","Type":"ContainerStarted","Data":"aa47fdadd385964b8feed7e563cc9354c833e2543397ea60151d38f8d6999e0a"} Mar 21 04:34:05 crc kubenswrapper[4923]: I0321 04:34:05.491116 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567788-6ztlt"] Mar 21 04:34:05 crc kubenswrapper[4923]: I0321 04:34:05.495148 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567788-6ztlt"] Mar 21 04:34:06 crc kubenswrapper[4923]: I0321 04:34:06.382147 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="208f0755-d10b-4b07-a191-dd2f2417f635" path="/var/lib/kubelet/pods/208f0755-d10b-4b07-a191-dd2f2417f635/volumes" Mar 21 04:34:09 crc kubenswrapper[4923]: I0321 04:34:09.236258 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-jqj57" event={"ID":"862bd87a-5130-41e2-a883-ac803db8df3b","Type":"ContainerStarted","Data":"21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25"} Mar 21 04:34:09 crc kubenswrapper[4923]: I0321 04:34:09.252698 4923 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/keystone-operator-index-jqj57" podStartSLOduration=2.187422651 podStartE2EDuration="6.252675691s" podCreationTimestamp="2026-03-21 04:34:03 +0000 UTC" firstStartedPulling="2026-03-21 04:34:04.261311019 +0000 UTC m=+1009.414322106" lastFinishedPulling="2026-03-21 04:34:08.326564049 +0000 UTC m=+1013.479575146" observedRunningTime="2026-03-21 04:34:09.25124553 +0000 UTC m=+1014.404256627" watchObservedRunningTime="2026-03-21 04:34:09.252675691 +0000 UTC m=+1014.405686818" Mar 21 04:34:11 crc kubenswrapper[4923]: I0321 04:34:11.258090 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"9701db56-65b1-4cee-8942-69fc9cc4e7b8","Type":"ContainerStarted","Data":"7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9"} Mar 21 04:34:13 crc kubenswrapper[4923]: I0321 04:34:13.802356 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:13 crc kubenswrapper[4923]: I0321 04:34:13.802731 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:13 crc kubenswrapper[4923]: I0321 04:34:13.830238 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:14 crc kubenswrapper[4923]: I0321 04:34:14.305208 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:34:17 crc kubenswrapper[4923]: I0321 04:34:17.371044 4923 scope.go:117] "RemoveContainer" containerID="f5096b59ebf59fa8e817a02949451bfa1788bc19ac1a6491917b9c93eff61574" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.498900 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m"] Mar 21 04:34:22 crc 
kubenswrapper[4923]: E0321 04:34:22.499738 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1023ab82-a41a-4fa0-a0fb-f19cf7ed3716" containerName="oc" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.499773 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="1023ab82-a41a-4fa0-a0fb-f19cf7ed3716" containerName="oc" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.499944 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="1023ab82-a41a-4fa0-a0fb-f19cf7ed3716" containerName="oc" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.500955 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.503798 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vcb7s" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.554170 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m"] Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.559221 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fncz2\" (UniqueName: \"kubernetes.io/projected/58877c28-2cb2-4659-9405-9242036b8a98-kube-api-access-fncz2\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.559284 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-util\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: 
\"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.559407 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-bundle\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.660883 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fncz2\" (UniqueName: \"kubernetes.io/projected/58877c28-2cb2-4659-9405-9242036b8a98-kube-api-access-fncz2\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.660952 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-util\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.660991 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-bundle\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 
04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.661536 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-bundle\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.661609 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-util\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.680086 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fncz2\" (UniqueName: \"kubernetes.io/projected/58877c28-2cb2-4659-9405-9242036b8a98-kube-api-access-fncz2\") pod \"25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:22 crc kubenswrapper[4923]: I0321 04:34:22.819498 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:23 crc kubenswrapper[4923]: I0321 04:34:23.054883 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m"] Mar 21 04:34:23 crc kubenswrapper[4923]: W0321 04:34:23.064720 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58877c28_2cb2_4659_9405_9242036b8a98.slice/crio-a0bdc98c1e9baedea0a6991ea3ba94ee5f163a983ab32b2ba5c717c5f98437d2 WatchSource:0}: Error finding container a0bdc98c1e9baedea0a6991ea3ba94ee5f163a983ab32b2ba5c717c5f98437d2: Status 404 returned error can't find the container with id a0bdc98c1e9baedea0a6991ea3ba94ee5f163a983ab32b2ba5c717c5f98437d2 Mar 21 04:34:23 crc kubenswrapper[4923]: I0321 04:34:23.353500 4923 generic.go:334] "Generic (PLEG): container finished" podID="58877c28-2cb2-4659-9405-9242036b8a98" containerID="0fe513b559ae0237ab405a0142663169cdf85eaf9192b5b44141e5d58391d49f" exitCode=0 Mar 21 04:34:23 crc kubenswrapper[4923]: I0321 04:34:23.353547 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" event={"ID":"58877c28-2cb2-4659-9405-9242036b8a98","Type":"ContainerDied","Data":"0fe513b559ae0237ab405a0142663169cdf85eaf9192b5b44141e5d58391d49f"} Mar 21 04:34:23 crc kubenswrapper[4923]: I0321 04:34:23.353579 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" event={"ID":"58877c28-2cb2-4659-9405-9242036b8a98","Type":"ContainerStarted","Data":"a0bdc98c1e9baedea0a6991ea3ba94ee5f163a983ab32b2ba5c717c5f98437d2"} Mar 21 04:34:24 crc kubenswrapper[4923]: I0321 04:34:24.367818 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" event={"ID":"58877c28-2cb2-4659-9405-9242036b8a98","Type":"ContainerStarted","Data":"291175e0c172462298c9695aac42064dfab9d8c34c6ae1dec9a3a4019fbfb480"} Mar 21 04:34:25 crc kubenswrapper[4923]: I0321 04:34:25.377036 4923 generic.go:334] "Generic (PLEG): container finished" podID="58877c28-2cb2-4659-9405-9242036b8a98" containerID="291175e0c172462298c9695aac42064dfab9d8c34c6ae1dec9a3a4019fbfb480" exitCode=0 Mar 21 04:34:25 crc kubenswrapper[4923]: I0321 04:34:25.377089 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" event={"ID":"58877c28-2cb2-4659-9405-9242036b8a98","Type":"ContainerDied","Data":"291175e0c172462298c9695aac42064dfab9d8c34c6ae1dec9a3a4019fbfb480"} Mar 21 04:34:26 crc kubenswrapper[4923]: I0321 04:34:26.388343 4923 generic.go:334] "Generic (PLEG): container finished" podID="58877c28-2cb2-4659-9405-9242036b8a98" containerID="6cdb7fd1cbb173c99bd888a9db792f4f2cb6a03e23fa0907b8d62c37f2f939c6" exitCode=0 Mar 21 04:34:26 crc kubenswrapper[4923]: I0321 04:34:26.388386 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" event={"ID":"58877c28-2cb2-4659-9405-9242036b8a98","Type":"ContainerDied","Data":"6cdb7fd1cbb173c99bd888a9db792f4f2cb6a03e23fa0907b8d62c37f2f939c6"} Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.680726 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.840980 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-bundle\") pod \"58877c28-2cb2-4659-9405-9242036b8a98\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.841100 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-util\") pod \"58877c28-2cb2-4659-9405-9242036b8a98\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.841866 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fncz2\" (UniqueName: \"kubernetes.io/projected/58877c28-2cb2-4659-9405-9242036b8a98-kube-api-access-fncz2\") pod \"58877c28-2cb2-4659-9405-9242036b8a98\" (UID: \"58877c28-2cb2-4659-9405-9242036b8a98\") " Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.842297 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-bundle" (OuterVolumeSpecName: "bundle") pod "58877c28-2cb2-4659-9405-9242036b8a98" (UID: "58877c28-2cb2-4659-9405-9242036b8a98"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.842681 4923 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.850733 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58877c28-2cb2-4659-9405-9242036b8a98-kube-api-access-fncz2" (OuterVolumeSpecName: "kube-api-access-fncz2") pod "58877c28-2cb2-4659-9405-9242036b8a98" (UID: "58877c28-2cb2-4659-9405-9242036b8a98"). InnerVolumeSpecName "kube-api-access-fncz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.862618 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-util" (OuterVolumeSpecName: "util") pod "58877c28-2cb2-4659-9405-9242036b8a98" (UID: "58877c28-2cb2-4659-9405-9242036b8a98"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.944033 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fncz2\" (UniqueName: \"kubernetes.io/projected/58877c28-2cb2-4659-9405-9242036b8a98-kube-api-access-fncz2\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:27 crc kubenswrapper[4923]: I0321 04:34:27.944560 4923 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58877c28-2cb2-4659-9405-9242036b8a98-util\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:28 crc kubenswrapper[4923]: I0321 04:34:28.414499 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" event={"ID":"58877c28-2cb2-4659-9405-9242036b8a98","Type":"ContainerDied","Data":"a0bdc98c1e9baedea0a6991ea3ba94ee5f163a983ab32b2ba5c717c5f98437d2"} Mar 21 04:34:28 crc kubenswrapper[4923]: I0321 04:34:28.414591 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bdc98c1e9baedea0a6991ea3ba94ee5f163a983ab32b2ba5c717c5f98437d2" Mar 21 04:34:28 crc kubenswrapper[4923]: I0321 04:34:28.414757 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.415856 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2"] Mar 21 04:34:40 crc kubenswrapper[4923]: E0321 04:34:40.416540 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58877c28-2cb2-4659-9405-9242036b8a98" containerName="extract" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.416553 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="58877c28-2cb2-4659-9405-9242036b8a98" containerName="extract" Mar 21 04:34:40 crc kubenswrapper[4923]: E0321 04:34:40.416561 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58877c28-2cb2-4659-9405-9242036b8a98" containerName="util" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.416567 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="58877c28-2cb2-4659-9405-9242036b8a98" containerName="util" Mar 21 04:34:40 crc kubenswrapper[4923]: E0321 04:34:40.416580 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58877c28-2cb2-4659-9405-9242036b8a98" containerName="pull" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.416585 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="58877c28-2cb2-4659-9405-9242036b8a98" containerName="pull" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.416693 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="58877c28-2cb2-4659-9405-9242036b8a98" containerName="extract" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.417084 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.418953 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.419700 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-96lw2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.430547 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2"] Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.525135 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-webhook-cert\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.525210 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-apiservice-cert\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.525255 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/9be365be-2d90-4b60-88e4-db66f0d6192f-kube-api-access-lbdjd\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" 
(UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.626775 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-apiservice-cert\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.626832 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/9be365be-2d90-4b60-88e4-db66f0d6192f-kube-api-access-lbdjd\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.626881 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-webhook-cert\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.632673 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-webhook-cert\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.637101 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-apiservice-cert\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.646345 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/9be365be-2d90-4b60-88e4-db66f0d6192f-kube-api-access-lbdjd\") pod \"keystone-operator-controller-manager-77c5d8b87c-gmbv2\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:40 crc kubenswrapper[4923]: I0321 04:34:40.735872 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:41 crc kubenswrapper[4923]: I0321 04:34:41.014188 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2"] Mar 21 04:34:41 crc kubenswrapper[4923]: W0321 04:34:41.038458 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9be365be_2d90_4b60_88e4_db66f0d6192f.slice/crio-c5c4925cae33c430e0ad84f0237eb6cd72799c6db52cc22ec0d5542bccf02ce9 WatchSource:0}: Error finding container c5c4925cae33c430e0ad84f0237eb6cd72799c6db52cc22ec0d5542bccf02ce9: Status 404 returned error can't find the container with id c5c4925cae33c430e0ad84f0237eb6cd72799c6db52cc22ec0d5542bccf02ce9 Mar 21 04:34:41 crc kubenswrapper[4923]: I0321 04:34:41.575672 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" 
event={"ID":"9be365be-2d90-4b60-88e4-db66f0d6192f","Type":"ContainerStarted","Data":"c5c4925cae33c430e0ad84f0237eb6cd72799c6db52cc22ec0d5542bccf02ce9"} Mar 21 04:34:43 crc kubenswrapper[4923]: I0321 04:34:43.588875 4923 generic.go:334] "Generic (PLEG): container finished" podID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerID="7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9" exitCode=0 Mar 21 04:34:43 crc kubenswrapper[4923]: I0321 04:34:43.588973 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"9701db56-65b1-4cee-8942-69fc9cc4e7b8","Type":"ContainerDied","Data":"7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9"} Mar 21 04:34:44 crc kubenswrapper[4923]: I0321 04:34:44.596200 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"9701db56-65b1-4cee-8942-69fc9cc4e7b8","Type":"ContainerStarted","Data":"ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd"} Mar 21 04:34:44 crc kubenswrapper[4923]: I0321 04:34:44.596695 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:34:44 crc kubenswrapper[4923]: I0321 04:34:44.598384 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" event={"ID":"9be365be-2d90-4b60-88e4-db66f0d6192f","Type":"ContainerStarted","Data":"b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3"} Mar 21 04:34:44 crc kubenswrapper[4923]: I0321 04:34:44.598520 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:44 crc kubenswrapper[4923]: I0321 04:34:44.612916 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/rabbitmq-server-0" podStartSLOduration=37.904750714 
podStartE2EDuration="44.612895573s" podCreationTimestamp="2026-03-21 04:34:00 +0000 UTC" firstStartedPulling="2026-03-21 04:34:02.401398195 +0000 UTC m=+1007.554409282" lastFinishedPulling="2026-03-21 04:34:09.109543014 +0000 UTC m=+1014.262554141" observedRunningTime="2026-03-21 04:34:44.61209889 +0000 UTC m=+1049.765109977" watchObservedRunningTime="2026-03-21 04:34:44.612895573 +0000 UTC m=+1049.765906660" Mar 21 04:34:44 crc kubenswrapper[4923]: I0321 04:34:44.634541 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" podStartSLOduration=1.409119585 podStartE2EDuration="4.634522279s" podCreationTimestamp="2026-03-21 04:34:40 +0000 UTC" firstStartedPulling="2026-03-21 04:34:41.042970545 +0000 UTC m=+1046.195981652" lastFinishedPulling="2026-03-21 04:34:44.268373259 +0000 UTC m=+1049.421384346" observedRunningTime="2026-03-21 04:34:44.632394687 +0000 UTC m=+1049.785405784" watchObservedRunningTime="2026-03-21 04:34:44.634522279 +0000 UTC m=+1049.787533366" Mar 21 04:34:50 crc kubenswrapper[4923]: I0321 04:34:50.741117 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.562772 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-db-create-rmgbw"] Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.564949 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.571858 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-rmgbw"] Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.576218 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52"] Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.576921 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.579100 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-db-secret" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.631779 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52"] Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.659935 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx247\" (UniqueName: \"kubernetes.io/projected/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-kube-api-access-fx247\") pod \"keystone-12c3-account-create-update-s6f52\" (UID: \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.660189 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2q8v\" (UniqueName: \"kubernetes.io/projected/ad250f90-bc88-40f3-9795-a800d8ac3af0-kube-api-access-w2q8v\") pod \"keystone-db-create-rmgbw\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.660278 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-operator-scripts\") pod \"keystone-12c3-account-create-update-s6f52\" (UID: \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.660388 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad250f90-bc88-40f3-9795-a800d8ac3af0-operator-scripts\") pod \"keystone-db-create-rmgbw\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.761759 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx247\" (UniqueName: \"kubernetes.io/projected/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-kube-api-access-fx247\") pod \"keystone-12c3-account-create-update-s6f52\" (UID: \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.761805 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2q8v\" (UniqueName: \"kubernetes.io/projected/ad250f90-bc88-40f3-9795-a800d8ac3af0-kube-api-access-w2q8v\") pod \"keystone-db-create-rmgbw\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.761831 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-operator-scripts\") pod \"keystone-12c3-account-create-update-s6f52\" (UID: \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " 
pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.761851 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad250f90-bc88-40f3-9795-a800d8ac3af0-operator-scripts\") pod \"keystone-db-create-rmgbw\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.762555 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad250f90-bc88-40f3-9795-a800d8ac3af0-operator-scripts\") pod \"keystone-db-create-rmgbw\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.763240 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-operator-scripts\") pod \"keystone-12c3-account-create-update-s6f52\" (UID: \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.789252 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2q8v\" (UniqueName: \"kubernetes.io/projected/ad250f90-bc88-40f3-9795-a800d8ac3af0-kube-api-access-w2q8v\") pod \"keystone-db-create-rmgbw\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.789253 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx247\" (UniqueName: \"kubernetes.io/projected/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-kube-api-access-fx247\") pod \"keystone-12c3-account-create-update-s6f52\" (UID: 
\"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.892113 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:55 crc kubenswrapper[4923]: I0321 04:34:55.901728 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.410091 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52"] Mar 21 04:34:56 crc kubenswrapper[4923]: W0321 04:34:56.414828 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a7b6c7_0d55_48b8_b305_9bde8cc5181f.slice/crio-fb6453c3ba09ddeb2bdf2c07cf08ce4ba892f1cdd823e5e75e069bf289019f4b WatchSource:0}: Error finding container fb6453c3ba09ddeb2bdf2c07cf08ce4ba892f1cdd823e5e75e069bf289019f4b: Status 404 returned error can't find the container with id fb6453c3ba09ddeb2bdf2c07cf08ce4ba892f1cdd823e5e75e069bf289019f4b Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.450443 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-rmgbw"] Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.698936 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" event={"ID":"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f","Type":"ContainerStarted","Data":"b78f5de8b80224be97834c33098373ca3ac57f11a77b993ca92b69d9b81ac40c"} Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.699000 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" 
event={"ID":"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f","Type":"ContainerStarted","Data":"fb6453c3ba09ddeb2bdf2c07cf08ce4ba892f1cdd823e5e75e069bf289019f4b"} Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.702731 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" event={"ID":"ad250f90-bc88-40f3-9795-a800d8ac3af0","Type":"ContainerStarted","Data":"7a7aa68afe6564412913a678196110a98c8732ad1c042075412e7a983181c145"} Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.702777 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" event={"ID":"ad250f90-bc88-40f3-9795-a800d8ac3af0","Type":"ContainerStarted","Data":"b2472c5f0493778fdd0624c0d73903bd1478454bc57f6f340e08967ff105b16f"} Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.733300 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" podStartSLOduration=1.733280305 podStartE2EDuration="1.733280305s" podCreationTimestamp="2026-03-21 04:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:34:56.722278057 +0000 UTC m=+1061.875289184" watchObservedRunningTime="2026-03-21 04:34:56.733280305 +0000 UTC m=+1061.886291402" Mar 21 04:34:56 crc kubenswrapper[4923]: I0321 04:34:56.743151 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" podStartSLOduration=1.74313094 podStartE2EDuration="1.74313094s" podCreationTimestamp="2026-03-21 04:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:34:56.738951099 +0000 UTC m=+1061.891962186" watchObservedRunningTime="2026-03-21 04:34:56.74313094 +0000 UTC m=+1061.896142037" Mar 21 04:34:57 crc 
kubenswrapper[4923]: I0321 04:34:57.711428 4923 generic.go:334] "Generic (PLEG): container finished" podID="e2a7b6c7-0d55-48b8-b305-9bde8cc5181f" containerID="b78f5de8b80224be97834c33098373ca3ac57f11a77b993ca92b69d9b81ac40c" exitCode=0 Mar 21 04:34:57 crc kubenswrapper[4923]: I0321 04:34:57.711538 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" event={"ID":"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f","Type":"ContainerDied","Data":"b78f5de8b80224be97834c33098373ca3ac57f11a77b993ca92b69d9b81ac40c"} Mar 21 04:34:57 crc kubenswrapper[4923]: I0321 04:34:57.714436 4923 generic.go:334] "Generic (PLEG): container finished" podID="ad250f90-bc88-40f3-9795-a800d8ac3af0" containerID="7a7aa68afe6564412913a678196110a98c8732ad1c042075412e7a983181c145" exitCode=0 Mar 21 04:34:57 crc kubenswrapper[4923]: I0321 04:34:57.714484 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" event={"ID":"ad250f90-bc88-40f3-9795-a800d8ac3af0","Type":"ContainerDied","Data":"7a7aa68afe6564412913a678196110a98c8732ad1c042075412e7a983181c145"} Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.449741 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-srxvr"] Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.450835 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-srxvr" Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.453603 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-5k7vr" Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.478151 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-srxvr"] Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.607775 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rg68\" (UniqueName: \"kubernetes.io/projected/018728c7-babf-46ea-8189-7381775a750d-kube-api-access-9rg68\") pod \"horizon-operator-index-srxvr\" (UID: \"018728c7-babf-46ea-8189-7381775a750d\") " pod="openstack-operators/horizon-operator-index-srxvr" Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.710003 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rg68\" (UniqueName: \"kubernetes.io/projected/018728c7-babf-46ea-8189-7381775a750d-kube-api-access-9rg68\") pod \"horizon-operator-index-srxvr\" (UID: \"018728c7-babf-46ea-8189-7381775a750d\") " pod="openstack-operators/horizon-operator-index-srxvr" Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.748231 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rg68\" (UniqueName: \"kubernetes.io/projected/018728c7-babf-46ea-8189-7381775a750d-kube-api-access-9rg68\") pod \"horizon-operator-index-srxvr\" (UID: \"018728c7-babf-46ea-8189-7381775a750d\") " pod="openstack-operators/horizon-operator-index-srxvr" Mar 21 04:34:58 crc kubenswrapper[4923]: I0321 04:34:58.784378 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-srxvr" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.140739 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.145307 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.250407 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx247\" (UniqueName: \"kubernetes.io/projected/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-kube-api-access-fx247\") pod \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\" (UID: \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.250559 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2q8v\" (UniqueName: \"kubernetes.io/projected/ad250f90-bc88-40f3-9795-a800d8ac3af0-kube-api-access-w2q8v\") pod \"ad250f90-bc88-40f3-9795-a800d8ac3af0\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.250642 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-operator-scripts\") pod \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\" (UID: \"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f\") " Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.250691 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad250f90-bc88-40f3-9795-a800d8ac3af0-operator-scripts\") pod \"ad250f90-bc88-40f3-9795-a800d8ac3af0\" (UID: \"ad250f90-bc88-40f3-9795-a800d8ac3af0\") " Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 
04:34:59.251475 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad250f90-bc88-40f3-9795-a800d8ac3af0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad250f90-bc88-40f3-9795-a800d8ac3af0" (UID: "ad250f90-bc88-40f3-9795-a800d8ac3af0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.251473 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2a7b6c7-0d55-48b8-b305-9bde8cc5181f" (UID: "e2a7b6c7-0d55-48b8-b305-9bde8cc5181f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.258001 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-kube-api-access-fx247" (OuterVolumeSpecName: "kube-api-access-fx247") pod "e2a7b6c7-0d55-48b8-b305-9bde8cc5181f" (UID: "e2a7b6c7-0d55-48b8-b305-9bde8cc5181f"). InnerVolumeSpecName "kube-api-access-fx247". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.258784 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad250f90-bc88-40f3-9795-a800d8ac3af0-kube-api-access-w2q8v" (OuterVolumeSpecName: "kube-api-access-w2q8v") pod "ad250f90-bc88-40f3-9795-a800d8ac3af0" (UID: "ad250f90-bc88-40f3-9795-a800d8ac3af0"). InnerVolumeSpecName "kube-api-access-w2q8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.337867 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-srxvr"] Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.356854 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad250f90-bc88-40f3-9795-a800d8ac3af0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.356897 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx247\" (UniqueName: \"kubernetes.io/projected/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-kube-api-access-fx247\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.356915 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2q8v\" (UniqueName: \"kubernetes.io/projected/ad250f90-bc88-40f3-9795-a800d8ac3af0-kube-api-access-w2q8v\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.356927 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.729082 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-srxvr" event={"ID":"018728c7-babf-46ea-8189-7381775a750d","Type":"ContainerStarted","Data":"7daa732758a74b235d64ee61c62e225b17fb38fed6ad3b14a8adcc15f03c84e8"} Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.730397 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" event={"ID":"e2a7b6c7-0d55-48b8-b305-9bde8cc5181f","Type":"ContainerDied","Data":"fb6453c3ba09ddeb2bdf2c07cf08ce4ba892f1cdd823e5e75e069bf289019f4b"} 
Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.730413 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.730422 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6453c3ba09ddeb2bdf2c07cf08ce4ba892f1cdd823e5e75e069bf289019f4b" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.731958 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" event={"ID":"ad250f90-bc88-40f3-9795-a800d8ac3af0","Type":"ContainerDied","Data":"b2472c5f0493778fdd0624c0d73903bd1478454bc57f6f340e08967ff105b16f"} Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.731978 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2472c5f0493778fdd0624c0d73903bd1478454bc57f6f340e08967ff105b16f" Mar 21 04:34:59 crc kubenswrapper[4923]: I0321 04:34:59.732048 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-db-create-rmgbw" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.173982 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.771592 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-srxvr" event={"ID":"018728c7-babf-46ea-8189-7381775a750d","Type":"ContainerStarted","Data":"74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66"} Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.787150 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-srxvr" podStartSLOduration=2.026067549 podStartE2EDuration="4.787135783s" podCreationTimestamp="2026-03-21 04:34:58 +0000 UTC" firstStartedPulling="2026-03-21 04:34:59.339946965 +0000 UTC m=+1064.492958082" lastFinishedPulling="2026-03-21 04:35:02.101015229 +0000 UTC m=+1067.254026316" observedRunningTime="2026-03-21 04:35:02.784234809 +0000 UTC m=+1067.937245906" watchObservedRunningTime="2026-03-21 04:35:02.787135783 +0000 UTC m=+1067.940146870" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.825393 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-mh7hb"] Mar 21 04:35:02 crc kubenswrapper[4923]: E0321 04:35:02.825735 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad250f90-bc88-40f3-9795-a800d8ac3af0" containerName="mariadb-database-create" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.825766 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad250f90-bc88-40f3-9795-a800d8ac3af0" containerName="mariadb-database-create" Mar 21 04:35:02 crc kubenswrapper[4923]: E0321 04:35:02.825817 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a7b6c7-0d55-48b8-b305-9bde8cc5181f" 
containerName="mariadb-account-create-update" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.825829 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a7b6c7-0d55-48b8-b305-9bde8cc5181f" containerName="mariadb-account-create-update" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.826033 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad250f90-bc88-40f3-9795-a800d8ac3af0" containerName="mariadb-database-create" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.826078 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a7b6c7-0d55-48b8-b305-9bde8cc5181f" containerName="mariadb-account-create-update" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.826701 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.828881 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.829886 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-config-data" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.830095 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-mh7hb"] Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.830159 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-scripts" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.830312 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-keystone-dockercfg-hwcf7" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.856283 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-srxvr"] Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.967566 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfct\" (UniqueName: \"kubernetes.io/projected/82d33d3e-8d71-4a21-bc75-51d7902603ec-kube-api-access-hwfct\") pod \"keystone-db-sync-mh7hb\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:02 crc kubenswrapper[4923]: I0321 04:35:02.967958 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d33d3e-8d71-4a21-bc75-51d7902603ec-config-data\") pod \"keystone-db-sync-mh7hb\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.069246 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d33d3e-8d71-4a21-bc75-51d7902603ec-config-data\") pod \"keystone-db-sync-mh7hb\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.069356 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfct\" (UniqueName: \"kubernetes.io/projected/82d33d3e-8d71-4a21-bc75-51d7902603ec-kube-api-access-hwfct\") pod \"keystone-db-sync-mh7hb\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.088282 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d33d3e-8d71-4a21-bc75-51d7902603ec-config-data\") pod \"keystone-db-sync-mh7hb\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.097893 4923 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hwfct\" (UniqueName: \"kubernetes.io/projected/82d33d3e-8d71-4a21-bc75-51d7902603ec-kube-api-access-hwfct\") pod \"keystone-db-sync-mh7hb\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.145216 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.447993 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-zldnz"] Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.449983 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.455187 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-zldnz"] Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.581676 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96qn6\" (UniqueName: \"kubernetes.io/projected/fa331624-aac7-4683-abe0-7e0f37b4b121-kube-api-access-96qn6\") pod \"horizon-operator-index-zldnz\" (UID: \"fa331624-aac7-4683-abe0-7e0f37b4b121\") " pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.593001 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-mh7hb"] Mar 21 04:35:03 crc kubenswrapper[4923]: W0321 04:35:03.597494 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82d33d3e_8d71_4a21_bc75_51d7902603ec.slice/crio-d55ab661dc8beccaf47a7311080b34022e5814c5ab8cb6a303d558ba61e9a8a3 WatchSource:0}: Error finding container 
d55ab661dc8beccaf47a7311080b34022e5814c5ab8cb6a303d558ba61e9a8a3: Status 404 returned error can't find the container with id d55ab661dc8beccaf47a7311080b34022e5814c5ab8cb6a303d558ba61e9a8a3 Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.682777 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96qn6\" (UniqueName: \"kubernetes.io/projected/fa331624-aac7-4683-abe0-7e0f37b4b121-kube-api-access-96qn6\") pod \"horizon-operator-index-zldnz\" (UID: \"fa331624-aac7-4683-abe0-7e0f37b4b121\") " pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.706154 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96qn6\" (UniqueName: \"kubernetes.io/projected/fa331624-aac7-4683-abe0-7e0f37b4b121-kube-api-access-96qn6\") pod \"horizon-operator-index-zldnz\" (UID: \"fa331624-aac7-4683-abe0-7e0f37b4b121\") " pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.768295 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:03 crc kubenswrapper[4923]: I0321 04:35:03.789422 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" event={"ID":"82d33d3e-8d71-4a21-bc75-51d7902603ec","Type":"ContainerStarted","Data":"d55ab661dc8beccaf47a7311080b34022e5814c5ab8cb6a303d558ba61e9a8a3"} Mar 21 04:35:04 crc kubenswrapper[4923]: I0321 04:35:04.084485 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-zldnz"] Mar 21 04:35:04 crc kubenswrapper[4923]: W0321 04:35:04.090835 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa331624_aac7_4683_abe0_7e0f37b4b121.slice/crio-f76367a85a99a73fe0d13a5af52e9df0e981daa4d6fecec3f1f31f495a4566c9 WatchSource:0}: Error finding container f76367a85a99a73fe0d13a5af52e9df0e981daa4d6fecec3f1f31f495a4566c9: Status 404 returned error can't find the container with id f76367a85a99a73fe0d13a5af52e9df0e981daa4d6fecec3f1f31f495a4566c9 Mar 21 04:35:04 crc kubenswrapper[4923]: I0321 04:35:04.796420 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-zldnz" event={"ID":"fa331624-aac7-4683-abe0-7e0f37b4b121","Type":"ContainerStarted","Data":"28aa09ba1ea8eee400bbbd93f70337c969b23d60b776918623fe6be8b69371fb"} Mar 21 04:35:04 crc kubenswrapper[4923]: I0321 04:35:04.796755 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-zldnz" event={"ID":"fa331624-aac7-4683-abe0-7e0f37b4b121","Type":"ContainerStarted","Data":"f76367a85a99a73fe0d13a5af52e9df0e981daa4d6fecec3f1f31f495a4566c9"} Mar 21 04:35:04 crc kubenswrapper[4923]: I0321 04:35:04.796508 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/horizon-operator-index-srxvr" 
podUID="018728c7-babf-46ea-8189-7381775a750d" containerName="registry-server" containerID="cri-o://74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66" gracePeriod=2 Mar 21 04:35:04 crc kubenswrapper[4923]: I0321 04:35:04.822435 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-zldnz" podStartSLOduration=1.774330976 podStartE2EDuration="1.822414076s" podCreationTimestamp="2026-03-21 04:35:03 +0000 UTC" firstStartedPulling="2026-03-21 04:35:04.094463814 +0000 UTC m=+1069.247474901" lastFinishedPulling="2026-03-21 04:35:04.142546914 +0000 UTC m=+1069.295558001" observedRunningTime="2026-03-21 04:35:04.813249251 +0000 UTC m=+1069.966260348" watchObservedRunningTime="2026-03-21 04:35:04.822414076 +0000 UTC m=+1069.975425163" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.236830 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-srxvr" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.402335 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rg68\" (UniqueName: \"kubernetes.io/projected/018728c7-babf-46ea-8189-7381775a750d-kube-api-access-9rg68\") pod \"018728c7-babf-46ea-8189-7381775a750d\" (UID: \"018728c7-babf-46ea-8189-7381775a750d\") " Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.409553 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018728c7-babf-46ea-8189-7381775a750d-kube-api-access-9rg68" (OuterVolumeSpecName: "kube-api-access-9rg68") pod "018728c7-babf-46ea-8189-7381775a750d" (UID: "018728c7-babf-46ea-8189-7381775a750d"). InnerVolumeSpecName "kube-api-access-9rg68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.505368 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rg68\" (UniqueName: \"kubernetes.io/projected/018728c7-babf-46ea-8189-7381775a750d-kube-api-access-9rg68\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.804185 4923 generic.go:334] "Generic (PLEG): container finished" podID="018728c7-babf-46ea-8189-7381775a750d" containerID="74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66" exitCode=0 Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.805095 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-srxvr" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.805403 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-srxvr" event={"ID":"018728c7-babf-46ea-8189-7381775a750d","Type":"ContainerDied","Data":"74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66"} Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.805454 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-srxvr" event={"ID":"018728c7-babf-46ea-8189-7381775a750d","Type":"ContainerDied","Data":"7daa732758a74b235d64ee61c62e225b17fb38fed6ad3b14a8adcc15f03c84e8"} Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.805470 4923 scope.go:117] "RemoveContainer" containerID="74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.830985 4923 scope.go:117] "RemoveContainer" containerID="74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66" Mar 21 04:35:05 crc kubenswrapper[4923]: E0321 04:35:05.831583 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66\": container with ID starting with 74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66 not found: ID does not exist" containerID="74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.831609 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66"} err="failed to get container status \"74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66\": rpc error: code = NotFound desc = could not find container \"74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66\": container with ID starting with 74b618da67dba9a204f9348d53c1f90aeb9186ee8800307b5cafd74d64ea0e66 not found: ID does not exist" Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.844947 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-srxvr"] Mar 21 04:35:05 crc kubenswrapper[4923]: I0321 04:35:05.855414 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/horizon-operator-index-srxvr"] Mar 21 04:35:06 crc kubenswrapper[4923]: I0321 04:35:06.378069 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018728c7-babf-46ea-8189-7381775a750d" path="/var/lib/kubelet/pods/018728c7-babf-46ea-8189-7381775a750d/volumes" Mar 21 04:35:13 crc kubenswrapper[4923]: I0321 04:35:13.769958 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:13 crc kubenswrapper[4923]: I0321 04:35:13.770631 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:13 crc kubenswrapper[4923]: I0321 04:35:13.794262 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:13 crc kubenswrapper[4923]: I0321 04:35:13.910462 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:35:14 crc kubenswrapper[4923]: I0321 04:35:14.893162 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" event={"ID":"82d33d3e-8d71-4a21-bc75-51d7902603ec","Type":"ContainerStarted","Data":"cd42bb74b94bd264bda475db0dfae54e95d48c2a3f421d2d45226c3b5c500e2d"} Mar 21 04:35:14 crc kubenswrapper[4923]: I0321 04:35:14.923807 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" podStartSLOduration=2.538728853 podStartE2EDuration="12.923789585s" podCreationTimestamp="2026-03-21 04:35:02 +0000 UTC" firstStartedPulling="2026-03-21 04:35:03.599145098 +0000 UTC m=+1068.752156195" lastFinishedPulling="2026-03-21 04:35:13.98420583 +0000 UTC m=+1079.137216927" observedRunningTime="2026-03-21 04:35:14.919404308 +0000 UTC m=+1080.072415435" watchObservedRunningTime="2026-03-21 04:35:14.923789585 +0000 UTC m=+1080.076800682" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.492134 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5"] Mar 21 04:35:16 crc kubenswrapper[4923]: E0321 04:35:16.493063 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018728c7-babf-46ea-8189-7381775a750d" containerName="registry-server" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.493097 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="018728c7-babf-46ea-8189-7381775a750d" containerName="registry-server" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.493448 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="018728c7-babf-46ea-8189-7381775a750d" 
containerName="registry-server" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.495776 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.497968 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vcb7s" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.500135 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5"] Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.573021 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-bundle\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.573112 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clc58\" (UniqueName: \"kubernetes.io/projected/fc3ac6f5-8825-4713-b073-ae95374bcd0e-kube-api-access-clc58\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.573198 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-util\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " 
pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.674264 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clc58\" (UniqueName: \"kubernetes.io/projected/fc3ac6f5-8825-4713-b073-ae95374bcd0e-kube-api-access-clc58\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.674719 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-util\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.674772 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-bundle\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.675530 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-util\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.679483 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-bundle\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.722260 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clc58\" (UniqueName: \"kubernetes.io/projected/fc3ac6f5-8825-4713-b073-ae95374bcd0e-kube-api-access-clc58\") pod \"e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:16 crc kubenswrapper[4923]: I0321 04:35:16.831377 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:17 crc kubenswrapper[4923]: I0321 04:35:17.290644 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5"] Mar 21 04:35:17 crc kubenswrapper[4923]: W0321 04:35:17.307228 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc3ac6f5_8825_4713_b073_ae95374bcd0e.slice/crio-3fe57660b5cf5dd2f95d532d846ccf39e124b9a45b09cffd190932f2a347ae7a WatchSource:0}: Error finding container 3fe57660b5cf5dd2f95d532d846ccf39e124b9a45b09cffd190932f2a347ae7a: Status 404 returned error can't find the container with id 3fe57660b5cf5dd2f95d532d846ccf39e124b9a45b09cffd190932f2a347ae7a Mar 21 04:35:17 crc kubenswrapper[4923]: I0321 04:35:17.924998 4923 generic.go:334] "Generic (PLEG): container finished" podID="82d33d3e-8d71-4a21-bc75-51d7902603ec" 
containerID="cd42bb74b94bd264bda475db0dfae54e95d48c2a3f421d2d45226c3b5c500e2d" exitCode=0 Mar 21 04:35:17 crc kubenswrapper[4923]: I0321 04:35:17.925113 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" event={"ID":"82d33d3e-8d71-4a21-bc75-51d7902603ec","Type":"ContainerDied","Data":"cd42bb74b94bd264bda475db0dfae54e95d48c2a3f421d2d45226c3b5c500e2d"} Mar 21 04:35:17 crc kubenswrapper[4923]: I0321 04:35:17.930896 4923 generic.go:334] "Generic (PLEG): container finished" podID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerID="c4bf1cf47293984749e6e23a25a8cb14f47dc6e80ff86976ff74803995b68599" exitCode=0 Mar 21 04:35:17 crc kubenswrapper[4923]: I0321 04:35:17.930962 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" event={"ID":"fc3ac6f5-8825-4713-b073-ae95374bcd0e","Type":"ContainerDied","Data":"c4bf1cf47293984749e6e23a25a8cb14f47dc6e80ff86976ff74803995b68599"} Mar 21 04:35:17 crc kubenswrapper[4923]: I0321 04:35:17.931006 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" event={"ID":"fc3ac6f5-8825-4713-b073-ae95374bcd0e","Type":"ContainerStarted","Data":"3fe57660b5cf5dd2f95d532d846ccf39e124b9a45b09cffd190932f2a347ae7a"} Mar 21 04:35:18 crc kubenswrapper[4923]: I0321 04:35:18.942165 4923 generic.go:334] "Generic (PLEG): container finished" podID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerID="0172b546f7be9c52e576ba33184e91ecff00885593896cc67d4627d03c9cf1a9" exitCode=0 Mar 21 04:35:18 crc kubenswrapper[4923]: I0321 04:35:18.943508 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" event={"ID":"fc3ac6f5-8825-4713-b073-ae95374bcd0e","Type":"ContainerDied","Data":"0172b546f7be9c52e576ba33184e91ecff00885593896cc67d4627d03c9cf1a9"} Mar 21 
04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.356077 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.514956 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d33d3e-8d71-4a21-bc75-51d7902603ec-config-data\") pod \"82d33d3e-8d71-4a21-bc75-51d7902603ec\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.515004 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwfct\" (UniqueName: \"kubernetes.io/projected/82d33d3e-8d71-4a21-bc75-51d7902603ec-kube-api-access-hwfct\") pod \"82d33d3e-8d71-4a21-bc75-51d7902603ec\" (UID: \"82d33d3e-8d71-4a21-bc75-51d7902603ec\") " Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.527549 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d33d3e-8d71-4a21-bc75-51d7902603ec-kube-api-access-hwfct" (OuterVolumeSpecName: "kube-api-access-hwfct") pod "82d33d3e-8d71-4a21-bc75-51d7902603ec" (UID: "82d33d3e-8d71-4a21-bc75-51d7902603ec"). InnerVolumeSpecName "kube-api-access-hwfct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.572703 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d33d3e-8d71-4a21-bc75-51d7902603ec-config-data" (OuterVolumeSpecName: "config-data") pod "82d33d3e-8d71-4a21-bc75-51d7902603ec" (UID: "82d33d3e-8d71-4a21-bc75-51d7902603ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.617625 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d33d3e-8d71-4a21-bc75-51d7902603ec-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.617777 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwfct\" (UniqueName: \"kubernetes.io/projected/82d33d3e-8d71-4a21-bc75-51d7902603ec-kube-api-access-hwfct\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.952060 4923 generic.go:334] "Generic (PLEG): container finished" podID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerID="fc89be06fb80c29622892c0d9a3553de526cba4252622588f3ff8c30d0cba281" exitCode=0 Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.952129 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" event={"ID":"fc3ac6f5-8825-4713-b073-ae95374bcd0e","Type":"ContainerDied","Data":"fc89be06fb80c29622892c0d9a3553de526cba4252622588f3ff8c30d0cba281"} Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.955053 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" event={"ID":"82d33d3e-8d71-4a21-bc75-51d7902603ec","Type":"ContainerDied","Data":"d55ab661dc8beccaf47a7311080b34022e5814c5ab8cb6a303d558ba61e9a8a3"} Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.955114 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d55ab661dc8beccaf47a7311080b34022e5814c5ab8cb6a303d558ba61e9a8a3" Mar 21 04:35:19 crc kubenswrapper[4923]: I0321 04:35:19.955123 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-db-sync-mh7hb" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.195381 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-tq59j"] Mar 21 04:35:20 crc kubenswrapper[4923]: E0321 04:35:20.195686 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d33d3e-8d71-4a21-bc75-51d7902603ec" containerName="keystone-db-sync" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.195707 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d33d3e-8d71-4a21-bc75-51d7902603ec" containerName="keystone-db-sync" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.195849 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d33d3e-8d71-4a21-bc75-51d7902603ec" containerName="keystone-db-sync" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.196361 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.198748 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.199587 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-config-data" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.199840 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"osp-secret" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.200129 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-keystone-dockercfg-hwcf7" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.200528 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-scripts" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.223678 4923 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-tq59j"] Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.346466 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-scripts\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.346564 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-fernet-keys\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.346595 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhbv5\" (UniqueName: \"kubernetes.io/projected/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-kube-api-access-nhbv5\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.346641 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-config-data\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.346761 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-credential-keys\") pod \"keystone-bootstrap-tq59j\" (UID: 
\"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.447916 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhbv5\" (UniqueName: \"kubernetes.io/projected/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-kube-api-access-nhbv5\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.447971 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-config-data\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.448014 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-credential-keys\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.448069 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-scripts\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.448095 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-fernet-keys\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " 
pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.454233 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-fernet-keys\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.455255 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-credential-keys\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.455397 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-scripts\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.457014 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-config-data\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 04:35:20.480111 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhbv5\" (UniqueName: \"kubernetes.io/projected/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-kube-api-access-nhbv5\") pod \"keystone-bootstrap-tq59j\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:20 crc kubenswrapper[4923]: I0321 
04:35:20.519480 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.015409 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-tq59j"] Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.308606 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.463033 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-bundle\") pod \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.463199 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clc58\" (UniqueName: \"kubernetes.io/projected/fc3ac6f5-8825-4713-b073-ae95374bcd0e-kube-api-access-clc58\") pod \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.463240 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-util\") pod \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\" (UID: \"fc3ac6f5-8825-4713-b073-ae95374bcd0e\") " Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.463807 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-bundle" (OuterVolumeSpecName: "bundle") pod "fc3ac6f5-8825-4713-b073-ae95374bcd0e" (UID: "fc3ac6f5-8825-4713-b073-ae95374bcd0e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.468130 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc3ac6f5-8825-4713-b073-ae95374bcd0e-kube-api-access-clc58" (OuterVolumeSpecName: "kube-api-access-clc58") pod "fc3ac6f5-8825-4713-b073-ae95374bcd0e" (UID: "fc3ac6f5-8825-4713-b073-ae95374bcd0e"). InnerVolumeSpecName "kube-api-access-clc58". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.476845 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-util" (OuterVolumeSpecName: "util") pod "fc3ac6f5-8825-4713-b073-ae95374bcd0e" (UID: "fc3ac6f5-8825-4713-b073-ae95374bcd0e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.564802 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clc58\" (UniqueName: \"kubernetes.io/projected/fc3ac6f5-8825-4713-b073-ae95374bcd0e-kube-api-access-clc58\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.564860 4923 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-util\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.564882 4923 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc3ac6f5-8825-4713-b073-ae95374bcd0e-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.975643 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" 
event={"ID":"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2","Type":"ContainerStarted","Data":"b69584deb1c4b452b1bde1c2430749956ff729d838c27506a89d489f02d94dd9"} Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.975694 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" event={"ID":"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2","Type":"ContainerStarted","Data":"f311eb6b52800b41f7527e9c6340a125f60588c74c2b462097dcf34903d5ae41"} Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.979833 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" event={"ID":"fc3ac6f5-8825-4713-b073-ae95374bcd0e","Type":"ContainerDied","Data":"3fe57660b5cf5dd2f95d532d846ccf39e124b9a45b09cffd190932f2a347ae7a"} Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.979895 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fe57660b5cf5dd2f95d532d846ccf39e124b9a45b09cffd190932f2a347ae7a" Mar 21 04:35:21 crc kubenswrapper[4923]: I0321 04:35:21.980000 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5" Mar 21 04:35:22 crc kubenswrapper[4923]: I0321 04:35:22.002056 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" podStartSLOduration=2.0020393 podStartE2EDuration="2.0020393s" podCreationTimestamp="2026-03-21 04:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:35:22.001475674 +0000 UTC m=+1087.154486791" watchObservedRunningTime="2026-03-21 04:35:22.0020393 +0000 UTC m=+1087.155050387" Mar 21 04:35:24 crc kubenswrapper[4923]: I0321 04:35:24.016272 4923 generic.go:334] "Generic (PLEG): container finished" podID="d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" containerID="b69584deb1c4b452b1bde1c2430749956ff729d838c27506a89d489f02d94dd9" exitCode=0 Mar 21 04:35:24 crc kubenswrapper[4923]: I0321 04:35:24.016384 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" event={"ID":"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2","Type":"ContainerDied","Data":"b69584deb1c4b452b1bde1c2430749956ff729d838c27506a89d489f02d94dd9"} Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.314635 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.423500 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhbv5\" (UniqueName: \"kubernetes.io/projected/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-kube-api-access-nhbv5\") pod \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.423597 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-config-data\") pod \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.423619 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-fernet-keys\") pod \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.423643 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-credential-keys\") pod \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.423679 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-scripts\") pod \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\" (UID: \"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2\") " Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.428511 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-kube-api-access-nhbv5" (OuterVolumeSpecName: "kube-api-access-nhbv5") pod "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" (UID: "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2"). InnerVolumeSpecName "kube-api-access-nhbv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.428682 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" (UID: "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.428718 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-scripts" (OuterVolumeSpecName: "scripts") pod "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" (UID: "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.429084 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" (UID: "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.443307 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-config-data" (OuterVolumeSpecName: "config-data") pod "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" (UID: "d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.525981 4923 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.526031 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhbv5\" (UniqueName: \"kubernetes.io/projected/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-kube-api-access-nhbv5\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.526059 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.526079 4923 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:25 crc kubenswrapper[4923]: I0321 04:35:25.526097 4923 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.042139 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" event={"ID":"d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2","Type":"ContainerDied","Data":"f311eb6b52800b41f7527e9c6340a125f60588c74c2b462097dcf34903d5ae41"} Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.042203 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f311eb6b52800b41f7527e9c6340a125f60588c74c2b462097dcf34903d5ae41" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.042303 4923 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-bootstrap-tq59j" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.288468 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone-8598c6cb74-zm4b6"] Mar 21 04:35:26 crc kubenswrapper[4923]: E0321 04:35:26.288790 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerName="pull" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.288813 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerName="pull" Mar 21 04:35:26 crc kubenswrapper[4923]: E0321 04:35:26.288827 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerName="util" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.288838 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerName="util" Mar 21 04:35:26 crc kubenswrapper[4923]: E0321 04:35:26.288858 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" containerName="keystone-bootstrap" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.288867 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" containerName="keystone-bootstrap" Mar 21 04:35:26 crc kubenswrapper[4923]: E0321 04:35:26.288881 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerName="extract" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.288891 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerName="extract" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.289049 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" 
containerName="keystone-bootstrap" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.289072 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" containerName="extract" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.289589 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.291692 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-keystone-dockercfg-hwcf7" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.291762 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.291962 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-scripts" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.292236 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"keystone-config-data" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.299236 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-8598c6cb74-zm4b6"] Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.439881 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-fernet-keys\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.439929 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-config-data\") pod \"keystone-8598c6cb74-zm4b6\" (UID: 
\"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.440001 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9nsg\" (UniqueName: \"kubernetes.io/projected/25ef91ae-d440-42d3-b259-8eacbb269ee0-kube-api-access-p9nsg\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.440132 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-scripts\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.440291 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-credential-keys\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.541792 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-fernet-keys\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.541843 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-config-data\") pod \"keystone-8598c6cb74-zm4b6\" (UID: 
\"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.541896 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9nsg\" (UniqueName: \"kubernetes.io/projected/25ef91ae-d440-42d3-b259-8eacbb269ee0-kube-api-access-p9nsg\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.541939 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-scripts\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.542023 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-credential-keys\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.546263 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-credential-keys\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.547761 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-config-data\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " 
pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.548807 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-fernet-keys\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.548853 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-scripts\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.559511 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9nsg\" (UniqueName: \"kubernetes.io/projected/25ef91ae-d440-42d3-b259-8eacbb269ee0-kube-api-access-p9nsg\") pod \"keystone-8598c6cb74-zm4b6\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:26 crc kubenswrapper[4923]: I0321 04:35:26.642705 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:27 crc kubenswrapper[4923]: I0321 04:35:27.056550 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone-8598c6cb74-zm4b6"] Mar 21 04:35:28 crc kubenswrapper[4923]: I0321 04:35:28.060414 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" event={"ID":"25ef91ae-d440-42d3-b259-8eacbb269ee0","Type":"ContainerStarted","Data":"b361b312c78daefb0aad02be0ac8c9a8844a370f3a89a1e2718f5fc3701d7620"} Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.442500 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh"] Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.444975 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.450482 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh"] Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.451729 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.457968 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rxptb" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.591505 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr6hq\" (UniqueName: \"kubernetes.io/projected/19d00725-90e1-4ad2-93b2-b5e71145cd2c-kube-api-access-nr6hq\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " 
pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.591567 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-apiservice-cert\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.591616 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-webhook-cert\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.692356 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr6hq\" (UniqueName: \"kubernetes.io/projected/19d00725-90e1-4ad2-93b2-b5e71145cd2c-kube-api-access-nr6hq\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.692411 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-apiservice-cert\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.692458 4923 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-webhook-cert\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.697752 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-apiservice-cert\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.711965 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-webhook-cert\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.711995 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr6hq\" (UniqueName: \"kubernetes.io/projected/19d00725-90e1-4ad2-93b2-b5e71145cd2c-kube-api-access-nr6hq\") pod \"horizon-operator-controller-manager-6b7c57b7cc-h4dgh\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:29 crc kubenswrapper[4923]: I0321 04:35:29.760361 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:30 crc kubenswrapper[4923]: I0321 04:35:30.078164 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" event={"ID":"25ef91ae-d440-42d3-b259-8eacbb269ee0","Type":"ContainerStarted","Data":"bef0072c8942a47f961e8b32d5c002c89bced5e3cf201f72c74e2ba70ac1c2c0"} Mar 21 04:35:30 crc kubenswrapper[4923]: I0321 04:35:30.078545 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:35:30 crc kubenswrapper[4923]: I0321 04:35:30.095909 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" podStartSLOduration=4.095892087 podStartE2EDuration="4.095892087s" podCreationTimestamp="2026-03-21 04:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:35:30.092612152 +0000 UTC m=+1095.245623259" watchObservedRunningTime="2026-03-21 04:35:30.095892087 +0000 UTC m=+1095.248903174" Mar 21 04:35:30 crc kubenswrapper[4923]: I0321 04:35:30.197770 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh"] Mar 21 04:35:30 crc kubenswrapper[4923]: W0321 04:35:30.200809 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19d00725_90e1_4ad2_93b2_b5e71145cd2c.slice/crio-2f34036d132c39e1486742d3cd33ff83435cd29b128c4e78eca5b3a2514e85cf WatchSource:0}: Error finding container 2f34036d132c39e1486742d3cd33ff83435cd29b128c4e78eca5b3a2514e85cf: Status 404 returned error can't find the container with id 2f34036d132c39e1486742d3cd33ff83435cd29b128c4e78eca5b3a2514e85cf Mar 21 04:35:31 crc kubenswrapper[4923]: I0321 04:35:31.083923 4923 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" event={"ID":"19d00725-90e1-4ad2-93b2-b5e71145cd2c","Type":"ContainerStarted","Data":"2f34036d132c39e1486742d3cd33ff83435cd29b128c4e78eca5b3a2514e85cf"} Mar 21 04:35:32 crc kubenswrapper[4923]: I0321 04:35:32.093403 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" event={"ID":"19d00725-90e1-4ad2-93b2-b5e71145cd2c","Type":"ContainerStarted","Data":"fcee0155bef288c395693ec8daa803b950d7a17971884f8b369e5446dbadb6d8"} Mar 21 04:35:32 crc kubenswrapper[4923]: I0321 04:35:32.093736 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:32 crc kubenswrapper[4923]: I0321 04:35:32.115609 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" podStartSLOduration=1.456303491 podStartE2EDuration="3.115591261s" podCreationTimestamp="2026-03-21 04:35:29 +0000 UTC" firstStartedPulling="2026-03-21 04:35:30.203828799 +0000 UTC m=+1095.356839886" lastFinishedPulling="2026-03-21 04:35:31.863116549 +0000 UTC m=+1097.016127656" observedRunningTime="2026-03-21 04:35:32.111915155 +0000 UTC m=+1097.264926252" watchObservedRunningTime="2026-03-21 04:35:32.115591261 +0000 UTC m=+1097.268602358" Mar 21 04:35:39 crc kubenswrapper[4923]: I0321 04:35:39.767713 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.572422 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-2vbtj"] Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.575857 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.577667 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.579042 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-scripts" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.580568 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-config-data" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.581376 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon-horizon-dockercfg-6dh96" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.598723 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-2vbtj"] Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.644896 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-l6t2z"] Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.646110 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.659084 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-l6t2z"] Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746365 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-config-data\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746410 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-horizon-secret-key\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746444 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05f35701-494b-4f96-9aeb-fe0b69a507d7-horizon-secret-key\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746470 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdvz\" (UniqueName: \"kubernetes.io/projected/05f35701-494b-4f96-9aeb-fe0b69a507d7-kube-api-access-dxdvz\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746502 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-scripts\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746614 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-logs\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746639 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-scripts\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746663 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f35701-494b-4f96-9aeb-fe0b69a507d7-logs\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746686 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2b7h\" (UniqueName: \"kubernetes.io/projected/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-kube-api-access-q2b7h\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.746721 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-config-data\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.848707 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-scripts\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.848794 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f35701-494b-4f96-9aeb-fe0b69a507d7-logs\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.848846 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2b7h\" (UniqueName: \"kubernetes.io/projected/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-kube-api-access-q2b7h\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.848928 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-config-data\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.848969 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-config-data\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.849000 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-horizon-secret-key\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.849043 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05f35701-494b-4f96-9aeb-fe0b69a507d7-horizon-secret-key\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.849081 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxdvz\" (UniqueName: \"kubernetes.io/projected/05f35701-494b-4f96-9aeb-fe0b69a507d7-kube-api-access-dxdvz\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.849122 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-scripts\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.849197 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-logs\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.849990 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f35701-494b-4f96-9aeb-fe0b69a507d7-logs\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.849997 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-logs\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.850845 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-config-data\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.851075 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-scripts\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.851362 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-scripts\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " 
pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.851796 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-config-data\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.856024 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05f35701-494b-4f96-9aeb-fe0b69a507d7-horizon-secret-key\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.859908 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-horizon-secret-key\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.873913 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxdvz\" (UniqueName: \"kubernetes.io/projected/05f35701-494b-4f96-9aeb-fe0b69a507d7-kube-api-access-dxdvz\") pod \"horizon-8bb8556c5-l6t2z\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.882190 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2b7h\" (UniqueName: \"kubernetes.io/projected/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-kube-api-access-q2b7h\") pod \"horizon-6675bd755-2vbtj\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 
04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.924519 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:45 crc kubenswrapper[4923]: I0321 04:35:45.973753 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:46 crc kubenswrapper[4923]: I0321 04:35:46.257962 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-l6t2z"] Mar 21 04:35:46 crc kubenswrapper[4923]: W0321 04:35:46.408636 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d6027a4_b1f8_4c39_bdf1_02a67312c5b1.slice/crio-aaa49692ccbc9e947329d92bb9b3957f2f98729e9ec06d7aaf8bb03b6c6231ef WatchSource:0}: Error finding container aaa49692ccbc9e947329d92bb9b3957f2f98729e9ec06d7aaf8bb03b6c6231ef: Status 404 returned error can't find the container with id aaa49692ccbc9e947329d92bb9b3957f2f98729e9ec06d7aaf8bb03b6c6231ef Mar 21 04:35:46 crc kubenswrapper[4923]: I0321 04:35:46.411274 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-2vbtj"] Mar 21 04:35:47 crc kubenswrapper[4923]: I0321 04:35:47.245214 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" event={"ID":"05f35701-494b-4f96-9aeb-fe0b69a507d7","Type":"ContainerStarted","Data":"fb38a822f7d5241120b4966f99ce74d5184c38972f266eef41e0391b671e0ac9"} Mar 21 04:35:47 crc kubenswrapper[4923]: I0321 04:35:47.248072 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" event={"ID":"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1","Type":"ContainerStarted","Data":"aaa49692ccbc9e947329d92bb9b3957f2f98729e9ec06d7aaf8bb03b6c6231ef"} Mar 21 04:35:54 crc kubenswrapper[4923]: I0321 04:35:54.601122 4923 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" event={"ID":"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1","Type":"ContainerStarted","Data":"e4b3712cef31a367a7fbf1b2e4ec4efbbdafd02c9f20ad801024984ab7025092"} Mar 21 04:35:54 crc kubenswrapper[4923]: I0321 04:35:54.601682 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" event={"ID":"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1","Type":"ContainerStarted","Data":"c5569f12d2fd2711a281aacedfdb1e9295c9299ba38f1f55eb500777b3e6fc52"} Mar 21 04:35:54 crc kubenswrapper[4923]: I0321 04:35:54.604694 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" event={"ID":"05f35701-494b-4f96-9aeb-fe0b69a507d7","Type":"ContainerStarted","Data":"d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8"} Mar 21 04:35:54 crc kubenswrapper[4923]: I0321 04:35:54.604739 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" event={"ID":"05f35701-494b-4f96-9aeb-fe0b69a507d7","Type":"ContainerStarted","Data":"bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414"} Mar 21 04:35:54 crc kubenswrapper[4923]: I0321 04:35:54.638886 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" podStartSLOduration=2.073860571 podStartE2EDuration="9.638865436s" podCreationTimestamp="2026-03-21 04:35:45 +0000 UTC" firstStartedPulling="2026-03-21 04:35:46.412073678 +0000 UTC m=+1111.565084775" lastFinishedPulling="2026-03-21 04:35:53.977078523 +0000 UTC m=+1119.130089640" observedRunningTime="2026-03-21 04:35:54.631870635 +0000 UTC m=+1119.784881752" watchObservedRunningTime="2026-03-21 04:35:54.638865436 +0000 UTC m=+1119.791876533" Mar 21 04:35:54 crc kubenswrapper[4923]: I0321 04:35:54.658736 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" 
podStartSLOduration=1.9493535789999998 podStartE2EDuration="9.658709723s" podCreationTimestamp="2026-03-21 04:35:45 +0000 UTC" firstStartedPulling="2026-03-21 04:35:46.267694368 +0000 UTC m=+1111.420705465" lastFinishedPulling="2026-03-21 04:35:53.977050502 +0000 UTC m=+1119.130061609" observedRunningTime="2026-03-21 04:35:54.655041408 +0000 UTC m=+1119.808052505" watchObservedRunningTime="2026-03-21 04:35:54.658709723 +0000 UTC m=+1119.811720820" Mar 21 04:35:55 crc kubenswrapper[4923]: I0321 04:35:55.924896 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:55 crc kubenswrapper[4923]: I0321 04:35:55.924948 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:35:55 crc kubenswrapper[4923]: I0321 04:35:55.973860 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:55 crc kubenswrapper[4923]: I0321 04:35:55.973911 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:35:58 crc kubenswrapper[4923]: I0321 04:35:58.031037 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.143543 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567796-xlf62"] Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.146447 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567796-xlf62" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.150648 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.150951 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.151733 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.155242 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567796-xlf62"] Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.274907 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d7mq\" (UniqueName: \"kubernetes.io/projected/b637d963-84a9-4fee-b457-b4be405728c3-kube-api-access-5d7mq\") pod \"auto-csr-approver-29567796-xlf62\" (UID: \"b637d963-84a9-4fee-b457-b4be405728c3\") " pod="openshift-infra/auto-csr-approver-29567796-xlf62" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.376649 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d7mq\" (UniqueName: \"kubernetes.io/projected/b637d963-84a9-4fee-b457-b4be405728c3-kube-api-access-5d7mq\") pod \"auto-csr-approver-29567796-xlf62\" (UID: \"b637d963-84a9-4fee-b457-b4be405728c3\") " pod="openshift-infra/auto-csr-approver-29567796-xlf62" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.411969 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d7mq\" (UniqueName: \"kubernetes.io/projected/b637d963-84a9-4fee-b457-b4be405728c3-kube-api-access-5d7mq\") pod \"auto-csr-approver-29567796-xlf62\" (UID: \"b637d963-84a9-4fee-b457-b4be405728c3\") " 
pod="openshift-infra/auto-csr-approver-29567796-xlf62" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.475103 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567796-xlf62" Mar 21 04:36:00 crc kubenswrapper[4923]: I0321 04:36:00.956636 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567796-xlf62"] Mar 21 04:36:00 crc kubenswrapper[4923]: W0321 04:36:00.960651 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb637d963_84a9_4fee_b457_b4be405728c3.slice/crio-0448820b3bc4e9d1eae62ce59e56786f651b34cc8a0bef1873f415e530298d97 WatchSource:0}: Error finding container 0448820b3bc4e9d1eae62ce59e56786f651b34cc8a0bef1873f415e530298d97: Status 404 returned error can't find the container with id 0448820b3bc4e9d1eae62ce59e56786f651b34cc8a0bef1873f415e530298d97 Mar 21 04:36:01 crc kubenswrapper[4923]: I0321 04:36:01.672336 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567796-xlf62" event={"ID":"b637d963-84a9-4fee-b457-b4be405728c3","Type":"ContainerStarted","Data":"0448820b3bc4e9d1eae62ce59e56786f651b34cc8a0bef1873f415e530298d97"} Mar 21 04:36:02 crc kubenswrapper[4923]: I0321 04:36:02.681312 4923 generic.go:334] "Generic (PLEG): container finished" podID="b637d963-84a9-4fee-b457-b4be405728c3" containerID="1d7bdb7189eaefd029af16537a610ffebb2e15977a2937064abc923ac43bba9e" exitCode=0 Mar 21 04:36:02 crc kubenswrapper[4923]: I0321 04:36:02.681492 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567796-xlf62" event={"ID":"b637d963-84a9-4fee-b457-b4be405728c3","Type":"ContainerDied","Data":"1d7bdb7189eaefd029af16537a610ffebb2e15977a2937064abc923ac43bba9e"} Mar 21 04:36:03 crc kubenswrapper[4923]: I0321 04:36:03.236008 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:36:03 crc kubenswrapper[4923]: I0321 04:36:03.236112 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:36:03 crc kubenswrapper[4923]: I0321 04:36:03.991855 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567796-xlf62" Mar 21 04:36:04 crc kubenswrapper[4923]: I0321 04:36:04.130313 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d7mq\" (UniqueName: \"kubernetes.io/projected/b637d963-84a9-4fee-b457-b4be405728c3-kube-api-access-5d7mq\") pod \"b637d963-84a9-4fee-b457-b4be405728c3\" (UID: \"b637d963-84a9-4fee-b457-b4be405728c3\") " Mar 21 04:36:04 crc kubenswrapper[4923]: I0321 04:36:04.141890 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b637d963-84a9-4fee-b457-b4be405728c3-kube-api-access-5d7mq" (OuterVolumeSpecName: "kube-api-access-5d7mq") pod "b637d963-84a9-4fee-b457-b4be405728c3" (UID: "b637d963-84a9-4fee-b457-b4be405728c3"). InnerVolumeSpecName "kube-api-access-5d7mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:36:04 crc kubenswrapper[4923]: I0321 04:36:04.231694 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d7mq\" (UniqueName: \"kubernetes.io/projected/b637d963-84a9-4fee-b457-b4be405728c3-kube-api-access-5d7mq\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:04 crc kubenswrapper[4923]: I0321 04:36:04.700987 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567796-xlf62" event={"ID":"b637d963-84a9-4fee-b457-b4be405728c3","Type":"ContainerDied","Data":"0448820b3bc4e9d1eae62ce59e56786f651b34cc8a0bef1873f415e530298d97"} Mar 21 04:36:04 crc kubenswrapper[4923]: I0321 04:36:04.701447 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0448820b3bc4e9d1eae62ce59e56786f651b34cc8a0bef1873f415e530298d97" Mar 21 04:36:04 crc kubenswrapper[4923]: I0321 04:36:04.701286 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567796-xlf62" Mar 21 04:36:05 crc kubenswrapper[4923]: I0321 04:36:05.088264 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567790-qlwn4"] Mar 21 04:36:05 crc kubenswrapper[4923]: I0321 04:36:05.095589 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567790-qlwn4"] Mar 21 04:36:05 crc kubenswrapper[4923]: I0321 04:36:05.927549 4923 prober.go:107] "Probe failed" probeType="Startup" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.91:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.91:8080: connect: connection refused" Mar 21 04:36:05 crc kubenswrapper[4923]: I0321 04:36:05.977061 4923 prober.go:107] "Probe failed" probeType="Startup" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" 
podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.93:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.93:8080: connect: connection refused" Mar 21 04:36:06 crc kubenswrapper[4923]: I0321 04:36:06.369718 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf8a628-e7fd-4829-b620-f1bbea8efd52" path="/var/lib/kubelet/pods/1bf8a628-e7fd-4829-b620-f1bbea8efd52/volumes" Mar 21 04:36:08 crc kubenswrapper[4923]: E0321 04:36:08.029732 4923 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.199:48858->38.102.83.199:37223: write tcp 38.102.83.199:48858->38.102.83.199:37223: write: broken pipe Mar 21 04:36:17 crc kubenswrapper[4923]: I0321 04:36:17.654795 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:36:17 crc kubenswrapper[4923]: I0321 04:36:17.824243 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:36:19 crc kubenswrapper[4923]: I0321 04:36:19.219460 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:36:19 crc kubenswrapper[4923]: I0321 04:36:19.578855 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:36:19 crc kubenswrapper[4923]: I0321 04:36:19.643644 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-2vbtj"] Mar 21 04:36:19 crc kubenswrapper[4923]: I0321 04:36:19.839792 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon-log" 
containerID="cri-o://c5569f12d2fd2711a281aacedfdb1e9295c9299ba38f1f55eb500777b3e6fc52" gracePeriod=30 Mar 21 04:36:19 crc kubenswrapper[4923]: I0321 04:36:19.840768 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" containerID="cri-o://e4b3712cef31a367a7fbf1b2e4ec4efbbdafd02c9f20ad801024984ab7025092" gracePeriod=30 Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.240587 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-d9lmr"] Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.241649 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b637d963-84a9-4fee-b457-b4be405728c3" containerName="oc" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.241679 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="b637d963-84a9-4fee-b457-b4be405728c3" containerName="oc" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.241923 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="b637d963-84a9-4fee-b457-b4be405728c3" containerName="oc" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.243167 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.246597 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-policy" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.265751 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-d9lmr"] Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.330112 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-d9lmr"] Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.330665 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data horizon-secret-key kube-api-access-zzsgb logs policy scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" podUID="62652fb3-0fbd-4946-9c93-5855f2a46149" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.336708 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-l6t2z"] Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.336988 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon-log" containerID="cri-o://bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414" gracePeriod=30 Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.337073 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" containerID="cri-o://d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8" gracePeriod=30 Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.361954 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-policy\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.362286 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzsgb\" (UniqueName: \"kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.362422 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.362528 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.362639 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.362737 4923 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62652fb3-0fbd-4946-9c93-5855f2a46149-logs\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.463586 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.463627 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62652fb3-0fbd-4946-9c93-5855f2a46149-logs\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.463672 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-policy\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.463725 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzsgb\" (UniqueName: \"kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.463751 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.463771 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.463821 4923 secret.go:188] Couldn't get secret horizon-kuttl-tests/horizon: secret "horizon" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.463941 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:23.9639095 +0000 UTC m=+1149.116920607 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "horizon-secret-key" (UniqueName: "kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : secret "horizon" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.463856 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-scripts: configmap "horizon-scripts" not found Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.464123 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62652fb3-0fbd-4946-9c93-5855f2a46149-logs\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.464131 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:23.964117346 +0000 UTC m=+1149.117128453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : configmap "horizon-scripts" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.464198 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-config-data: configmap "horizon-config-data" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.464238 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. 
No retries permitted until 2026-03-21 04:36:23.964226499 +0000 UTC m=+1149.117237596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : configmap "horizon-config-data" not found Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.464887 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-policy\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.469410 4923 projected.go:194] Error preparing data for projected volume kube-api-access-zzsgb for pod horizon-kuttl-tests/horizon-845cfdcdb-d9lmr: failed to fetch token: serviceaccounts "horizon-horizon" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.469452 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:23.969441488 +0000 UTC m=+1149.122452575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zzsgb" (UniqueName: "kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : failed to fetch token: serviceaccounts "horizon-horizon" not found Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.874121 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.884355 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.970202 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-policy\") pod \"62652fb3-0fbd-4946-9c93-5855f2a46149\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.970268 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62652fb3-0fbd-4946-9c93-5855f2a46149-logs\") pod \"62652fb3-0fbd-4946-9c93-5855f2a46149\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.970599 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzsgb\" (UniqueName: \"kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.970653 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.970698 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts\") pod \"horizon-845cfdcdb-d9lmr\" 
(UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.970775 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.970832 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-config-data: configmap "horizon-config-data" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.970899 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-scripts: configmap "horizon-scripts" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.970933 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:24.970901264 +0000 UTC m=+1150.123912401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : configmap "horizon-config-data" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.970968 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:24.970948655 +0000 UTC m=+1150.123959782 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : configmap "horizon-scripts" not found Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.970892 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62652fb3-0fbd-4946-9c93-5855f2a46149-logs" (OuterVolumeSpecName: "logs") pod "62652fb3-0fbd-4946-9c93-5855f2a46149" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.971152 4923 secret.go:188] Couldn't get secret horizon-kuttl-tests/horizon: secret "horizon" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.971233 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:24.971205943 +0000 UTC m=+1150.124217060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "horizon-secret-key" (UniqueName: "kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : secret "horizon" not found Mar 21 04:36:23 crc kubenswrapper[4923]: I0321 04:36:23.971938 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-policy" (OuterVolumeSpecName: "policy") pod "62652fb3-0fbd-4946-9c93-5855f2a46149" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149"). InnerVolumeSpecName "policy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.975104 4923 projected.go:194] Error preparing data for projected volume kube-api-access-zzsgb for pod horizon-kuttl-tests/horizon-845cfdcdb-d9lmr: failed to fetch token: serviceaccounts "horizon-horizon" not found Mar 21 04:36:23 crc kubenswrapper[4923]: E0321 04:36:23.975235 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:24.975202267 +0000 UTC m=+1150.128213394 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zzsgb" (UniqueName: "kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : failed to fetch token: serviceaccounts "horizon-horizon" not found Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.071575 4923 reconciler_common.go:293] "Volume detached for volume \"policy\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-policy\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.071832 4923 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62652fb3-0fbd-4946-9c93-5855f2a46149-logs\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.881761 4923 generic.go:334] "Generic (PLEG): container finished" podID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerID="e4b3712cef31a367a7fbf1b2e4ec4efbbdafd02c9f20ad801024984ab7025092" exitCode=0 Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.881811 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" 
event={"ID":"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1","Type":"ContainerDied","Data":"e4b3712cef31a367a7fbf1b2e4ec4efbbdafd02c9f20ad801024984ab7025092"} Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.881829 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.920226 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-d9lmr"] Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.937756 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-845cfdcdb-d9lmr"] Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.982442 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.982499 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.982590 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzsgb\" (UniqueName: \"kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.982603 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-scripts: 
configmap "horizon-scripts" not found Mar 21 04:36:24 crc kubenswrapper[4923]: I0321 04:36:24.982625 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data\") pod \"horizon-845cfdcdb-d9lmr\" (UID: \"62652fb3-0fbd-4946-9c93-5855f2a46149\") " pod="horizon-kuttl-tests/horizon-845cfdcdb-d9lmr" Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.982674 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:26.982656599 +0000 UTC m=+1152.135667686 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : configmap "horizon-scripts" not found Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.982729 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/horizon-config-data: configmap "horizon-config-data" not found Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.982733 4923 secret.go:188] Couldn't get secret horizon-kuttl-tests/horizon: secret "horizon" not found Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.982779 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:26.982764352 +0000 UTC m=+1152.135775439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : configmap "horizon-config-data" not found Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.982812 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:26.982792363 +0000 UTC m=+1152.135803450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "horizon-secret-key" (UniqueName: "kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : secret "horizon" not found Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.986080 4923 projected.go:194] Error preparing data for projected volume kube-api-access-zzsgb for pod horizon-kuttl-tests/horizon-845cfdcdb-d9lmr: failed to fetch token: pod "horizon-845cfdcdb-d9lmr" not found Mar 21 04:36:24 crc kubenswrapper[4923]: E0321 04:36:24.986125 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb podName:62652fb3-0fbd-4946-9c93-5855f2a46149 nodeName:}" failed. No retries permitted until 2026-03-21 04:36:26.986116498 +0000 UTC m=+1152.139127585 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zzsgb" (UniqueName: "kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb") pod "horizon-845cfdcdb-d9lmr" (UID: "62652fb3-0fbd-4946-9c93-5855f2a46149") : failed to fetch token: pod "horizon-845cfdcdb-d9lmr" not found Mar 21 04:36:25 crc kubenswrapper[4923]: I0321 04:36:25.083556 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:25 crc kubenswrapper[4923]: I0321 04:36:25.083590 4923 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/62652fb3-0fbd-4946-9c93-5855f2a46149-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:25 crc kubenswrapper[4923]: I0321 04:36:25.083605 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzsgb\" (UniqueName: \"kubernetes.io/projected/62652fb3-0fbd-4946-9c93-5855f2a46149-kube-api-access-zzsgb\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:25 crc kubenswrapper[4923]: I0321 04:36:25.083617 4923 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62652fb3-0fbd-4946-9c93-5855f2a46149-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:25 crc kubenswrapper[4923]: I0321 04:36:25.926685 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.91:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.91:8080: connect: connection refused" Mar 21 04:36:26 crc kubenswrapper[4923]: I0321 04:36:26.366462 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62652fb3-0fbd-4946-9c93-5855f2a46149" 
path="/var/lib/kubelet/pods/62652fb3-0fbd-4946-9c93-5855f2a46149/volumes" Mar 21 04:36:26 crc kubenswrapper[4923]: I0321 04:36:26.406104 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.93:8080/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:44980->10.217.0.93:8080: read: connection reset by peer" Mar 21 04:36:26 crc kubenswrapper[4923]: I0321 04:36:26.904246 4923 generic.go:334] "Generic (PLEG): container finished" podID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerID="d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8" exitCode=0 Mar 21 04:36:26 crc kubenswrapper[4923]: I0321 04:36:26.904302 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" event={"ID":"05f35701-494b-4f96-9aeb-fe0b69a507d7","Type":"ContainerDied","Data":"d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8"} Mar 21 04:36:33 crc kubenswrapper[4923]: I0321 04:36:33.235611 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:36:33 crc kubenswrapper[4923]: I0321 04:36:33.236228 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:36:35 crc kubenswrapper[4923]: I0321 04:36:35.925643 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" 
podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.91:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.91:8080: connect: connection refused" Mar 21 04:36:35 crc kubenswrapper[4923]: I0321 04:36:35.974566 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.93:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.93:8080: connect: connection refused" Mar 21 04:36:45 crc kubenswrapper[4923]: I0321 04:36:45.927063 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.91:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.91:8080: connect: connection refused" Mar 21 04:36:45 crc kubenswrapper[4923]: I0321 04:36:45.927932 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:36:45 crc kubenswrapper[4923]: I0321 04:36:45.974642 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.93:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.93:8080: connect: connection refused" Mar 21 04:36:45 crc kubenswrapper[4923]: I0321 04:36:45.974786 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.125560 4923 generic.go:334] "Generic (PLEG): container finished" podID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" 
containerID="c5569f12d2fd2711a281aacedfdb1e9295c9299ba38f1f55eb500777b3e6fc52" exitCode=137 Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.126046 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" event={"ID":"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1","Type":"ContainerDied","Data":"c5569f12d2fd2711a281aacedfdb1e9295c9299ba38f1f55eb500777b3e6fc52"} Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.262962 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.418697 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-scripts\") pod \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.418822 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-horizon-secret-key\") pod \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.418864 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2b7h\" (UniqueName: \"kubernetes.io/projected/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-kube-api-access-q2b7h\") pod \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.418958 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-logs\") pod \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\" (UID: 
\"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.419034 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-config-data\") pod \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\" (UID: \"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1\") " Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.419899 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-logs" (OuterVolumeSpecName: "logs") pod "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" (UID: "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.427396 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" (UID: "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.433542 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-kube-api-access-q2b7h" (OuterVolumeSpecName: "kube-api-access-q2b7h") pod "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" (UID: "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1"). InnerVolumeSpecName "kube-api-access-q2b7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.443887 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-config-data" (OuterVolumeSpecName: "config-data") pod "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" (UID: "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.450741 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-scripts" (OuterVolumeSpecName: "scripts") pod "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" (UID: "7d6027a4-b1f8-4c39-bdf1-02a67312c5b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.521312 4923 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-logs\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.521399 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.521418 4923 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 04:36:50.521434 4923 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:50 crc kubenswrapper[4923]: I0321 
04:36:50.521452 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2b7h\" (UniqueName: \"kubernetes.io/projected/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1-kube-api-access-q2b7h\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:51 crc kubenswrapper[4923]: I0321 04:36:51.139067 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" event={"ID":"7d6027a4-b1f8-4c39-bdf1-02a67312c5b1","Type":"ContainerDied","Data":"aaa49692ccbc9e947329d92bb9b3957f2f98729e9ec06d7aaf8bb03b6c6231ef"} Mar 21 04:36:51 crc kubenswrapper[4923]: I0321 04:36:51.139149 4923 scope.go:117] "RemoveContainer" containerID="e4b3712cef31a367a7fbf1b2e4ec4efbbdafd02c9f20ad801024984ab7025092" Mar 21 04:36:51 crc kubenswrapper[4923]: I0321 04:36:51.139170 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-6675bd755-2vbtj" Mar 21 04:36:51 crc kubenswrapper[4923]: I0321 04:36:51.185069 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-2vbtj"] Mar 21 04:36:51 crc kubenswrapper[4923]: I0321 04:36:51.194019 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-6675bd755-2vbtj"] Mar 21 04:36:51 crc kubenswrapper[4923]: I0321 04:36:51.397091 4923 scope.go:117] "RemoveContainer" containerID="c5569f12d2fd2711a281aacedfdb1e9295c9299ba38f1f55eb500777b3e6fc52" Mar 21 04:36:52 crc kubenswrapper[4923]: I0321 04:36:52.390565 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" path="/var/lib/kubelet/pods/7d6027a4-b1f8-4c39-bdf1-02a67312c5b1/volumes" Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.773635 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.975529 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05f35701-494b-4f96-9aeb-fe0b69a507d7-horizon-secret-key\") pod \"05f35701-494b-4f96-9aeb-fe0b69a507d7\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.977224 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-config-data\") pod \"05f35701-494b-4f96-9aeb-fe0b69a507d7\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.977559 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-scripts\") pod \"05f35701-494b-4f96-9aeb-fe0b69a507d7\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.977812 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f35701-494b-4f96-9aeb-fe0b69a507d7-logs\") pod \"05f35701-494b-4f96-9aeb-fe0b69a507d7\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.978028 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxdvz\" (UniqueName: \"kubernetes.io/projected/05f35701-494b-4f96-9aeb-fe0b69a507d7-kube-api-access-dxdvz\") pod \"05f35701-494b-4f96-9aeb-fe0b69a507d7\" (UID: \"05f35701-494b-4f96-9aeb-fe0b69a507d7\") " Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.978649 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/05f35701-494b-4f96-9aeb-fe0b69a507d7-logs" (OuterVolumeSpecName: "logs") pod "05f35701-494b-4f96-9aeb-fe0b69a507d7" (UID: "05f35701-494b-4f96-9aeb-fe0b69a507d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.984542 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f35701-494b-4f96-9aeb-fe0b69a507d7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "05f35701-494b-4f96-9aeb-fe0b69a507d7" (UID: "05f35701-494b-4f96-9aeb-fe0b69a507d7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:36:53 crc kubenswrapper[4923]: I0321 04:36:53.986879 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f35701-494b-4f96-9aeb-fe0b69a507d7-kube-api-access-dxdvz" (OuterVolumeSpecName: "kube-api-access-dxdvz") pod "05f35701-494b-4f96-9aeb-fe0b69a507d7" (UID: "05f35701-494b-4f96-9aeb-fe0b69a507d7"). InnerVolumeSpecName "kube-api-access-dxdvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.010159 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-config-data" (OuterVolumeSpecName: "config-data") pod "05f35701-494b-4f96-9aeb-fe0b69a507d7" (UID: "05f35701-494b-4f96-9aeb-fe0b69a507d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.023104 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-scripts" (OuterVolumeSpecName: "scripts") pod "05f35701-494b-4f96-9aeb-fe0b69a507d7" (UID: "05f35701-494b-4f96-9aeb-fe0b69a507d7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.079913 4923 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05f35701-494b-4f96-9aeb-fe0b69a507d7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.079970 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.079996 4923 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05f35701-494b-4f96-9aeb-fe0b69a507d7-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.080020 4923 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05f35701-494b-4f96-9aeb-fe0b69a507d7-logs\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.080043 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxdvz\" (UniqueName: \"kubernetes.io/projected/05f35701-494b-4f96-9aeb-fe0b69a507d7-kube-api-access-dxdvz\") on node \"crc\" DevicePath \"\"" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.173662 4923 generic.go:334] "Generic (PLEG): container finished" podID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerID="bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414" exitCode=137 Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.173728 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" event={"ID":"05f35701-494b-4f96-9aeb-fe0b69a507d7","Type":"ContainerDied","Data":"bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414"} Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 
04:36:54.173771 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" event={"ID":"05f35701-494b-4f96-9aeb-fe0b69a507d7","Type":"ContainerDied","Data":"fb38a822f7d5241120b4966f99ce74d5184c38972f266eef41e0391b671e0ac9"} Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.173818 4923 scope.go:117] "RemoveContainer" containerID="d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.174233 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-8bb8556c5-l6t2z" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.223851 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-l6t2z"] Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.232897 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-8bb8556c5-l6t2z"] Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.370734 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" path="/var/lib/kubelet/pods/05f35701-494b-4f96-9aeb-fe0b69a507d7/volumes" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.372410 4923 scope.go:117] "RemoveContainer" containerID="bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.392716 4923 scope.go:117] "RemoveContainer" containerID="d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8" Mar 21 04:36:54 crc kubenswrapper[4923]: E0321 04:36:54.393639 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8\": container with ID starting with d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8 not found: ID does not exist" 
containerID="d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.393675 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8"} err="failed to get container status \"d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8\": rpc error: code = NotFound desc = could not find container \"d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8\": container with ID starting with d0f2a7442c79ccad1b5fbc51f186610fc0497c0f2a9f8bf44b231231fc437af8 not found: ID does not exist" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.393698 4923 scope.go:117] "RemoveContainer" containerID="bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414" Mar 21 04:36:54 crc kubenswrapper[4923]: E0321 04:36:54.394156 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414\": container with ID starting with bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414 not found: ID does not exist" containerID="bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414" Mar 21 04:36:54 crc kubenswrapper[4923]: I0321 04:36:54.394186 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414"} err="failed to get container status \"bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414\": rpc error: code = NotFound desc = could not find container \"bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414\": container with ID starting with bbda3c18740499587857b94bca9bb6385c4220536123010d85286cfa08722414 not found: ID does not exist" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433243 4923 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-gfkj7"] Mar 21 04:36:55 crc kubenswrapper[4923]: E0321 04:36:55.433638 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433661 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" Mar 21 04:36:55 crc kubenswrapper[4923]: E0321 04:36:55.433676 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon-log" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433687 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon-log" Mar 21 04:36:55 crc kubenswrapper[4923]: E0321 04:36:55.433708 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433717 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" Mar 21 04:36:55 crc kubenswrapper[4923]: E0321 04:36:55.433728 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon-log" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433736 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon-log" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433881 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" containerName="horizon" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433897 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f35701-494b-4f96-9aeb-fe0b69a507d7" 
containerName="horizon-log" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433911 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.433925 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6027a4-b1f8-4c39-bdf1-02a67312c5b1" containerName="horizon-log" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.434848 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.438376 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"combined-ca-bundle" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.439980 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-config-data" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.440234 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon-horizon-dockercfg-7s2sd" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.440279 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"cert-horizon-svc" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.442359 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"horizon-kuttl-tests"/"horizon-scripts" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.442363 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"horizon" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.463833 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-gfkj7"] Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.504103 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-tls-certs\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.504229 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9zm\" (UniqueName: \"kubernetes.io/projected/46c77abf-a506-43c4-a9bc-9b8134e23cbe-kube-api-access-9t9zm\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.504387 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c77abf-a506-43c4-a9bc-9b8134e23cbe-logs\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.504466 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-config-data\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.504637 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-combined-ca-bundle\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.504696 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-scripts\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.504741 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-secret-key\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.541123 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz"] Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.543767 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.605632 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-combined-ca-bundle\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.605688 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-scripts\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.605972 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-secret-key\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.606028 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-tls-certs\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.606072 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9zm\" (UniqueName: \"kubernetes.io/projected/46c77abf-a506-43c4-a9bc-9b8134e23cbe-kube-api-access-9t9zm\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.606101 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c77abf-a506-43c4-a9bc-9b8134e23cbe-logs\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.606122 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-config-data\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.606401 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-scripts\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.606910 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c77abf-a506-43c4-a9bc-9b8134e23cbe-logs\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.607146 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-config-data\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.610154 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-secret-key\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.610182 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz"] Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.611832 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-combined-ca-bundle\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.611910 4923 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-tls-certs\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.625597 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9zm\" (UniqueName: \"kubernetes.io/projected/46c77abf-a506-43c4-a9bc-9b8134e23cbe-kube-api-access-9t9zm\") pod \"horizon-5b545c459d-gfkj7\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.706708 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-scripts\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.706770 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-config-data\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.706798 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-secret-key\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.706832 4923 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-tls-certs\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.706869 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-combined-ca-bundle\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.706910 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-logs\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.706927 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbbl\" (UniqueName: \"kubernetes.io/projected/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-kube-api-access-tvbbl\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.755800 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.808398 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-scripts\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.808736 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-config-data\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.808763 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-secret-key\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.808793 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-tls-certs\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.808825 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-combined-ca-bundle\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " 
pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.808862 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbbl\" (UniqueName: \"kubernetes.io/projected/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-kube-api-access-tvbbl\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.808879 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-logs\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.809259 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-logs\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.809777 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-scripts\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.810802 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-config-data\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.819230 
4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-combined-ca-bundle\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.819291 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-tls-certs\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.819610 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-secret-key\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.832168 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbbl\" (UniqueName: \"kubernetes.io/projected/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-kube-api-access-tvbbl\") pod \"horizon-579fd4dcd4-2n5qz\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:55 crc kubenswrapper[4923]: I0321 04:36:55.866747 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:36:56 crc kubenswrapper[4923]: I0321 04:36:56.075975 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz"] Mar 21 04:36:56 crc kubenswrapper[4923]: I0321 04:36:56.190316 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-gfkj7"] Mar 21 04:36:56 crc kubenswrapper[4923]: W0321 04:36:56.190816 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46c77abf_a506_43c4_a9bc_9b8134e23cbe.slice/crio-d21f0516c23076f073aa4dfd5e4eab6f5ef67d92dd7b36ec79e3d55a09d6648a WatchSource:0}: Error finding container d21f0516c23076f073aa4dfd5e4eab6f5ef67d92dd7b36ec79e3d55a09d6648a: Status 404 returned error can't find the container with id d21f0516c23076f073aa4dfd5e4eab6f5ef67d92dd7b36ec79e3d55a09d6648a Mar 21 04:36:56 crc kubenswrapper[4923]: I0321 04:36:56.194193 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" event={"ID":"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607","Type":"ContainerStarted","Data":"30828105fff5b6301f130d4b36251e87c6266ce6b48868e5334cfd2b3c52ea5b"} Mar 21 04:36:57 crc kubenswrapper[4923]: I0321 04:36:57.221082 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" event={"ID":"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607","Type":"ContainerStarted","Data":"a2c644282366b0bb6cc010da06dbb10e5a0dacc99937754b757fed563f2ee87a"} Mar 21 04:36:57 crc kubenswrapper[4923]: I0321 04:36:57.221807 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" event={"ID":"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607","Type":"ContainerStarted","Data":"986fa853b157aed857edb08d418f634814e16a8d2e7552474731ce1798a735a6"} Mar 21 04:36:57 crc kubenswrapper[4923]: I0321 04:36:57.227166 4923 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" event={"ID":"46c77abf-a506-43c4-a9bc-9b8134e23cbe","Type":"ContainerStarted","Data":"64f8db1b69f0f7f35fb8aad0936502733810c0201ebc1a1b20fdd5f67d90af59"} Mar 21 04:36:57 crc kubenswrapper[4923]: I0321 04:36:57.227244 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" event={"ID":"46c77abf-a506-43c4-a9bc-9b8134e23cbe","Type":"ContainerStarted","Data":"ff7866063f07fb43f18c8ca7052e22a98cd5e08ce51c2d4a38ed6f8f52045b0d"} Mar 21 04:36:57 crc kubenswrapper[4923]: I0321 04:36:57.227268 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" event={"ID":"46c77abf-a506-43c4-a9bc-9b8134e23cbe","Type":"ContainerStarted","Data":"d21f0516c23076f073aa4dfd5e4eab6f5ef67d92dd7b36ec79e3d55a09d6648a"} Mar 21 04:36:57 crc kubenswrapper[4923]: I0321 04:36:57.258277 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" podStartSLOduration=2.258246579 podStartE2EDuration="2.258246579s" podCreationTimestamp="2026-03-21 04:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:36:57.249986074 +0000 UTC m=+1182.402997201" watchObservedRunningTime="2026-03-21 04:36:57.258246579 +0000 UTC m=+1182.411257696" Mar 21 04:36:57 crc kubenswrapper[4923]: I0321 04:36:57.286600 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" podStartSLOduration=2.286569277 podStartE2EDuration="2.286569277s" podCreationTimestamp="2026-03-21 04:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:36:57.273647819 +0000 UTC m=+1182.426658946" watchObservedRunningTime="2026-03-21 
04:36:57.286569277 +0000 UTC m=+1182.439580414" Mar 21 04:37:03 crc kubenswrapper[4923]: I0321 04:37:03.235662 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:37:03 crc kubenswrapper[4923]: I0321 04:37:03.236263 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:37:03 crc kubenswrapper[4923]: I0321 04:37:03.236366 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" Mar 21 04:37:03 crc kubenswrapper[4923]: I0321 04:37:03.237181 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed57872de3a4ec46ca622220eae1ec6afcee8d851a5ca3d72e36b3f3c20665be"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 21 04:37:03 crc kubenswrapper[4923]: I0321 04:37:03.237278 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://ed57872de3a4ec46ca622220eae1ec6afcee8d851a5ca3d72e36b3f3c20665be" gracePeriod=600 Mar 21 04:37:04 crc kubenswrapper[4923]: I0321 04:37:04.297783 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" 
containerID="ed57872de3a4ec46ca622220eae1ec6afcee8d851a5ca3d72e36b3f3c20665be" exitCode=0 Mar 21 04:37:04 crc kubenswrapper[4923]: I0321 04:37:04.297851 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"ed57872de3a4ec46ca622220eae1ec6afcee8d851a5ca3d72e36b3f3c20665be"} Mar 21 04:37:04 crc kubenswrapper[4923]: I0321 04:37:04.298247 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"fe257518d39f35ad4d416456034fb8a1f94f5ac481d8616c2afbf6e356e0b8a6"} Mar 21 04:37:04 crc kubenswrapper[4923]: I0321 04:37:04.298278 4923 scope.go:117] "RemoveContainer" containerID="861e5e7c19712fc1c95009bcdddeab790be4423ba276affe7ecbcb3c0afdf835" Mar 21 04:37:05 crc kubenswrapper[4923]: I0321 04:37:05.756684 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:37:05 crc kubenswrapper[4923]: I0321 04:37:05.757046 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:37:05 crc kubenswrapper[4923]: I0321 04:37:05.867489 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:37:05 crc kubenswrapper[4923]: I0321 04:37:05.867554 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:37:17 crc kubenswrapper[4923]: I0321 04:37:17.520619 4923 scope.go:117] "RemoveContainer" containerID="b88cdc3dc7e305c8295384aa37a8efda2778164f6d665c228d041f5ebcc00091" Mar 21 04:37:17 crc kubenswrapper[4923]: I0321 04:37:17.562076 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:37:17 crc kubenswrapper[4923]: I0321 04:37:17.695486 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.130104 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.272548 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.356811 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-gfkj7"] Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.450385 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon-log" containerID="cri-o://ff7866063f07fb43f18c8ca7052e22a98cd5e08ce51c2d4a38ed6f8f52045b0d" gracePeriod=30 Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.450474 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon" containerID="cri-o://64f8db1b69f0f7f35fb8aad0936502733810c0201ebc1a1b20fdd5f67d90af59" gracePeriod=30 Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.571900 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz"] Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.572106 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon-log" 
containerID="cri-o://986fa853b157aed857edb08d418f634814e16a8d2e7552474731ce1798a735a6" gracePeriod=30 Mar 21 04:37:19 crc kubenswrapper[4923]: I0321 04:37:19.572479 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon" containerID="cri-o://a2c644282366b0bb6cc010da06dbb10e5a0dacc99937754b757fed563f2ee87a" gracePeriod=30 Mar 21 04:37:23 crc kubenswrapper[4923]: I0321 04:37:23.493432 4923 generic.go:334] "Generic (PLEG): container finished" podID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerID="64f8db1b69f0f7f35fb8aad0936502733810c0201ebc1a1b20fdd5f67d90af59" exitCode=0 Mar 21 04:37:23 crc kubenswrapper[4923]: I0321 04:37:23.493535 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" event={"ID":"46c77abf-a506-43c4-a9bc-9b8134e23cbe","Type":"ContainerDied","Data":"64f8db1b69f0f7f35fb8aad0936502733810c0201ebc1a1b20fdd5f67d90af59"} Mar 21 04:37:23 crc kubenswrapper[4923]: I0321 04:37:23.497049 4923 generic.go:334] "Generic (PLEG): container finished" podID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerID="a2c644282366b0bb6cc010da06dbb10e5a0dacc99937754b757fed563f2ee87a" exitCode=0 Mar 21 04:37:23 crc kubenswrapper[4923]: I0321 04:37:23.497102 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" event={"ID":"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607","Type":"ContainerDied","Data":"a2c644282366b0bb6cc010da06dbb10e5a0dacc99937754b757fed563f2ee87a"} Mar 21 04:37:25 crc kubenswrapper[4923]: I0321 04:37:25.756623 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.96:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.96:8443: connect: 
connection refused" Mar 21 04:37:25 crc kubenswrapper[4923]: I0321 04:37:25.867313 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.97:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.97:8443: connect: connection refused" Mar 21 04:37:35 crc kubenswrapper[4923]: I0321 04:37:35.757710 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.96:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.96:8443: connect: connection refused" Mar 21 04:37:35 crc kubenswrapper[4923]: I0321 04:37:35.867768 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.97:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.97:8443: connect: connection refused" Mar 21 04:37:45 crc kubenswrapper[4923]: I0321 04:37:45.758161 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.96:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.96:8443: connect: connection refused" Mar 21 04:37:45 crc kubenswrapper[4923]: I0321 04:37:45.759039 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:37:45 crc kubenswrapper[4923]: I0321 04:37:45.867917 4923 prober.go:107] "Probe failed" probeType="Readiness" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" 
podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.97:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.97:8443: connect: connection refused" Mar 21 04:37:45 crc kubenswrapper[4923]: I0321 04:37:45.868089 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.751270 4923 generic.go:334] "Generic (PLEG): container finished" podID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerID="ff7866063f07fb43f18c8ca7052e22a98cd5e08ce51c2d4a38ed6f8f52045b0d" exitCode=137 Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.751386 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" event={"ID":"46c77abf-a506-43c4-a9bc-9b8134e23cbe","Type":"ContainerDied","Data":"ff7866063f07fb43f18c8ca7052e22a98cd5e08ce51c2d4a38ed6f8f52045b0d"} Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.755157 4923 generic.go:334] "Generic (PLEG): container finished" podID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerID="986fa853b157aed857edb08d418f634814e16a8d2e7552474731ce1798a735a6" exitCode=137 Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.755195 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" event={"ID":"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607","Type":"ContainerDied","Data":"986fa853b157aed857edb08d418f634814e16a8d2e7552474731ce1798a735a6"} Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.894707 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934177 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t9zm\" (UniqueName: \"kubernetes.io/projected/46c77abf-a506-43c4-a9bc-9b8134e23cbe-kube-api-access-9t9zm\") pod \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934235 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-combined-ca-bundle\") pod \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934279 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-tls-certs\") pod \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934371 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-config-data\") pod \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934469 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c77abf-a506-43c4-a9bc-9b8134e23cbe-logs\") pod \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934503 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-scripts\") pod \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934608 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-secret-key\") pod \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\" (UID: \"46c77abf-a506-43c4-a9bc-9b8134e23cbe\") " Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934908 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c77abf-a506-43c4-a9bc-9b8134e23cbe-logs" (OuterVolumeSpecName: "logs") pod "46c77abf-a506-43c4-a9bc-9b8134e23cbe" (UID: "46c77abf-a506-43c4-a9bc-9b8134e23cbe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.934984 4923 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c77abf-a506-43c4-a9bc-9b8134e23cbe-logs\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.944508 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "46c77abf-a506-43c4-a9bc-9b8134e23cbe" (UID: "46c77abf-a506-43c4-a9bc-9b8134e23cbe"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.944541 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c77abf-a506-43c4-a9bc-9b8134e23cbe-kube-api-access-9t9zm" (OuterVolumeSpecName: "kube-api-access-9t9zm") pod "46c77abf-a506-43c4-a9bc-9b8134e23cbe" (UID: "46c77abf-a506-43c4-a9bc-9b8134e23cbe"). InnerVolumeSpecName "kube-api-access-9t9zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.949680 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.955562 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46c77abf-a506-43c4-a9bc-9b8134e23cbe" (UID: "46c77abf-a506-43c4-a9bc-9b8134e23cbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.963400 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-scripts" (OuterVolumeSpecName: "scripts") pod "46c77abf-a506-43c4-a9bc-9b8134e23cbe" (UID: "46c77abf-a506-43c4-a9bc-9b8134e23cbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.967871 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-config-data" (OuterVolumeSpecName: "config-data") pod "46c77abf-a506-43c4-a9bc-9b8134e23cbe" (UID: "46c77abf-a506-43c4-a9bc-9b8134e23cbe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:37:49 crc kubenswrapper[4923]: I0321 04:37:49.986702 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "46c77abf-a506-43c4-a9bc-9b8134e23cbe" (UID: "46c77abf-a506-43c4-a9bc-9b8134e23cbe"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036318 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-config-data\") pod \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036386 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-secret-key\") pod \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036418 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-tls-certs\") pod \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036512 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-combined-ca-bundle\") pod \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036535 4923 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-logs\") pod \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036560 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-scripts\") pod \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036610 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvbbl\" (UniqueName: \"kubernetes.io/projected/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-kube-api-access-tvbbl\") pod \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\" (UID: \"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607\") " Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036850 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036861 4923 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46c77abf-a506-43c4-a9bc-9b8134e23cbe-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036870 4923 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036880 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t9zm\" (UniqueName: 
\"kubernetes.io/projected/46c77abf-a506-43c4-a9bc-9b8134e23cbe-kube-api-access-9t9zm\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036889 4923 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.036897 4923 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c77abf-a506-43c4-a9bc-9b8134e23cbe-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.037837 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-logs" (OuterVolumeSpecName: "logs") pod "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" (UID: "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.039721 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-kube-api-access-tvbbl" (OuterVolumeSpecName: "kube-api-access-tvbbl") pod "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" (UID: "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607"). InnerVolumeSpecName "kube-api-access-tvbbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.039789 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" (UID: "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.055311 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-config-data" (OuterVolumeSpecName: "config-data") pod "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" (UID: "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.068935 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" (UID: "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.069084 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-scripts" (OuterVolumeSpecName: "scripts") pod "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" (UID: "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.084695 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" (UID: "9678b2e5-c9ce-42d1-a85f-2fcf0cee8607"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.138416 4923 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.138481 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvbbl\" (UniqueName: \"kubernetes.io/projected/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-kube-api-access-tvbbl\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.138510 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.138537 4923 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.138562 4923 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.138586 4923 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.138609 4923 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607-logs\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.770815 4923 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" event={"ID":"46c77abf-a506-43c4-a9bc-9b8134e23cbe","Type":"ContainerDied","Data":"d21f0516c23076f073aa4dfd5e4eab6f5ef67d92dd7b36ec79e3d55a09d6648a"} Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.770913 4923 scope.go:117] "RemoveContainer" containerID="64f8db1b69f0f7f35fb8aad0936502733810c0201ebc1a1b20fdd5f67d90af59" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.771778 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/horizon-5b545c459d-gfkj7" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.775354 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" event={"ID":"9678b2e5-c9ce-42d1-a85f-2fcf0cee8607","Type":"ContainerDied","Data":"30828105fff5b6301f130d4b36251e87c6266ce6b48868e5334cfd2b3c52ea5b"} Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.776438 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz" Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.828141 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-gfkj7"] Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.835518 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-5b545c459d-gfkj7"] Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.841743 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz"] Mar 21 04:37:50 crc kubenswrapper[4923]: I0321 04:37:50.846429 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/horizon-579fd4dcd4-2n5qz"] Mar 21 04:37:51 crc kubenswrapper[4923]: I0321 04:37:51.019571 4923 scope.go:117] "RemoveContainer" containerID="ff7866063f07fb43f18c8ca7052e22a98cd5e08ce51c2d4a38ed6f8f52045b0d" Mar 21 04:37:51 crc kubenswrapper[4923]: I0321 04:37:51.042082 4923 scope.go:117] "RemoveContainer" containerID="a2c644282366b0bb6cc010da06dbb10e5a0dacc99937754b757fed563f2ee87a" Mar 21 04:37:51 crc kubenswrapper[4923]: I0321 04:37:51.291075 4923 scope.go:117] "RemoveContainer" containerID="986fa853b157aed857edb08d418f634814e16a8d2e7552474731ce1798a735a6" Mar 21 04:37:52 crc kubenswrapper[4923]: I0321 04:37:52.375079 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" path="/var/lib/kubelet/pods/46c77abf-a506-43c4-a9bc-9b8134e23cbe/volumes" Mar 21 04:37:52 crc kubenswrapper[4923]: I0321 04:37:52.376785 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" path="/var/lib/kubelet/pods/9678b2e5-c9ce-42d1-a85f-2fcf0cee8607/volumes" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.375593 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-tq59j"] Mar 21 04:37:57 crc 
kubenswrapper[4923]: I0321 04:37:57.383239 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-mh7hb"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.394452 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-db-sync-mh7hb"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.400270 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-8598c6cb74-zm4b6"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.400804 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" podUID="25ef91ae-d440-42d3-b259-8eacbb269ee0" containerName="keystone-api" containerID="cri-o://bef0072c8942a47f961e8b32d5c002c89bced5e3cf201f72c74e2ba70ac1c2c0" gracePeriod=30 Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.407241 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-bootstrap-tq59j"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414175 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/keystone12c3-account-delete-9mvjj"] Mar 21 04:37:57 crc kubenswrapper[4923]: E0321 04:37:57.414605 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414625 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon" Mar 21 04:37:57 crc kubenswrapper[4923]: E0321 04:37:57.414642 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon-log" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414651 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon-log" Mar 21 04:37:57 crc 
kubenswrapper[4923]: E0321 04:37:57.414682 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414691 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon" Mar 21 04:37:57 crc kubenswrapper[4923]: E0321 04:37:57.414708 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon-log" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414718 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon-log" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414873 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414888 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414902 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c77abf-a506-43c4-a9bc-9b8134e23cbe" containerName="horizon-log" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.414915 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="9678b2e5-c9ce-42d1-a85f-2fcf0cee8607" containerName="horizon-log" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.415535 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.423690 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone12c3-account-delete-9mvjj"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.458821 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts\") pod \"keystone12c3-account-delete-9mvjj\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.459445 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67p5b\" (UniqueName: \"kubernetes.io/projected/80d61db4-20c9-4ea6-a8d7-435f200abfc4-kube-api-access-67p5b\") pod \"keystone12c3-account-delete-9mvjj\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.562049 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts\") pod \"keystone12c3-account-delete-9mvjj\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.562190 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67p5b\" (UniqueName: \"kubernetes.io/projected/80d61db4-20c9-4ea6-a8d7-435f200abfc4-kube-api-access-67p5b\") pod \"keystone12c3-account-delete-9mvjj\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc 
kubenswrapper[4923]: I0321 04:37:57.563114 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts\") pod \"keystone12c3-account-delete-9mvjj\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.603141 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67p5b\" (UniqueName: \"kubernetes.io/projected/80d61db4-20c9-4ea6-a8d7-435f200abfc4-kube-api-access-67p5b\") pod \"keystone12c3-account-delete-9mvjj\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.743629 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.923078 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8ftdm"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.948922 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-8ftdm"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.954819 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["horizon-kuttl-tests/root-account-create-update-72fxk"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.955805 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.958081 4923 reflector.go:368] Caches populated for *v1.Secret from object-"horizon-kuttl-tests"/"openstack-mariadb-root-db-secret" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.973407 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-72fxk"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.983887 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfsv6\" (UniqueName: \"kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.983938 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.988231 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Mar 21 04:37:57 crc kubenswrapper[4923]: I0321 04:37:57.994154 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:57.999911 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.016267 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-72fxk"] Mar 21 
04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.016768 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bfsv6 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="horizon-kuttl-tests/root-account-create-update-72fxk" podUID="01502f16-7a10-48a6-bdd9-a125aa2d2ecf" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.084577 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfsv6\" (UniqueName: \"kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.084640 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.084743 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.084806 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts podName:01502f16-7a10-48a6-bdd9-a125aa2d2ecf nodeName:}" failed. No retries permitted until 2026-03-21 04:37:58.584780639 +0000 UTC m=+1243.737791746 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts") pod "root-account-create-update-72fxk" (UID: "01502f16-7a10-48a6-bdd9-a125aa2d2ecf") : configmap "openstack-scripts" not found Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.088214 4923 projected.go:194] Error preparing data for projected volume kube-api-access-bfsv6 for pod horizon-kuttl-tests/root-account-create-update-72fxk: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.088336 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6 podName:01502f16-7a10-48a6-bdd9-a125aa2d2ecf nodeName:}" failed. No retries permitted until 2026-03-21 04:37:58.588292108 +0000 UTC m=+1243.741303295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bfsv6" (UniqueName: "kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6") pod "root-account-create-update-72fxk" (UID: "01502f16-7a10-48a6-bdd9-a125aa2d2ecf") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.114456 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/openstack-galera-2" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerName="galera" containerID="cri-o://d2d47752695a7418eef6b77def1c5fe7f38251dbcc7064bcdfab145553e44ca2" gracePeriod=30 Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.246458 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/keystone12c3-account-delete-9mvjj"] Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.366534 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7331201d-612f-4047-b9f6-15634dddeebc" 
path="/var/lib/kubelet/pods/7331201d-612f-4047-b9f6-15634dddeebc/volumes" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.367306 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d33d3e-8d71-4a21-bc75-51d7902603ec" path="/var/lib/kubelet/pods/82d33d3e-8d71-4a21-bc75-51d7902603ec/volumes" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.368064 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2" path="/var/lib/kubelet/pods/d8812d11-f241-4d8e-bc4a-bcb8a13ff7c2/volumes" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.592484 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfsv6\" (UniqueName: \"kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.592878 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.593110 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.593174 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts podName:01502f16-7a10-48a6-bdd9-a125aa2d2ecf nodeName:}" failed. No retries permitted until 2026-03-21 04:37:59.593151029 +0000 UTC m=+1244.746162126 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts") pod "root-account-create-update-72fxk" (UID: "01502f16-7a10-48a6-bdd9-a125aa2d2ecf") : configmap "openstack-scripts" not found Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.598995 4923 projected.go:194] Error preparing data for projected volume kube-api-access-bfsv6 for pod horizon-kuttl-tests/root-account-create-update-72fxk: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 21 04:37:58 crc kubenswrapper[4923]: E0321 04:37:58.599085 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6 podName:01502f16-7a10-48a6-bdd9-a125aa2d2ecf nodeName:}" failed. No retries permitted until 2026-03-21 04:37:59.599059536 +0000 UTC m=+1244.752070633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bfsv6" (UniqueName: "kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6") pod "root-account-create-update-72fxk" (UID: "01502f16-7a10-48a6-bdd9-a125aa2d2ecf") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.606502 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.606703 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/memcached-0" podUID="09f2119f-aa10-497e-bc3b-547681df5eb4" containerName="memcached" containerID="cri-o://b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d" gracePeriod=30 Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.899678 4923 generic.go:334] "Generic (PLEG): container finished" podID="216ff735-f76d-413a-bff8-8e0dfd4177c2" 
containerID="d2d47752695a7418eef6b77def1c5fe7f38251dbcc7064bcdfab145553e44ca2" exitCode=0 Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.899760 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"216ff735-f76d-413a-bff8-8e0dfd4177c2","Type":"ContainerDied","Data":"d2d47752695a7418eef6b77def1c5fe7f38251dbcc7064bcdfab145553e44ca2"} Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.901101 4923 generic.go:334] "Generic (PLEG): container finished" podID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerID="862549dac9dec48804471c14c5088c2685fd509df5671e6d4585bb3af12c1c05" exitCode=1 Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.901157 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" event={"ID":"80d61db4-20c9-4ea6-a8d7-435f200abfc4","Type":"ContainerDied","Data":"862549dac9dec48804471c14c5088c2685fd509df5671e6d4585bb3af12c1c05"} Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.901189 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.901213 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" event={"ID":"80d61db4-20c9-4ea6-a8d7-435f200abfc4","Type":"ContainerStarted","Data":"b1a3fa65e83c29cce818d090b6835b66259c907abace49d3a54530c5c643c2c1"} Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.901882 4923 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" secret="" err="secret \"galera-openstack-dockercfg-p97f5\" not found" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.901936 4923 scope.go:117] "RemoveContainer" containerID="862549dac9dec48804471c14c5088c2685fd509df5671e6d4585bb3af12c1c05" Mar 21 04:37:58 crc kubenswrapper[4923]: I0321 04:37:58.925987 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.033171 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.047739 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.099946 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.100041 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts podName:80d61db4-20c9-4ea6-a8d7-435f200abfc4 nodeName:}" failed. No retries permitted until 2026-03-21 04:37:59.600016137 +0000 UTC m=+1244.753027224 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts") pod "keystone12c3-account-delete-9mvjj" (UID: "80d61db4-20c9-4ea6-a8d7-435f200abfc4") : configmap "openstack-scripts" not found Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.200565 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-generated\") pod \"216ff735-f76d-413a-bff8-8e0dfd4177c2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.200864 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-operator-scripts\") pod \"216ff735-f76d-413a-bff8-8e0dfd4177c2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.200898 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjmj6\" (UniqueName: \"kubernetes.io/projected/216ff735-f76d-413a-bff8-8e0dfd4177c2-kube-api-access-gjmj6\") pod \"216ff735-f76d-413a-bff8-8e0dfd4177c2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.200914 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-default\") pod \"216ff735-f76d-413a-bff8-8e0dfd4177c2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.200943 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-kolla-config\") 
pod \"216ff735-f76d-413a-bff8-8e0dfd4177c2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.200967 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"216ff735-f76d-413a-bff8-8e0dfd4177c2\" (UID: \"216ff735-f76d-413a-bff8-8e0dfd4177c2\") " Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.201084 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "216ff735-f76d-413a-bff8-8e0dfd4177c2" (UID: "216ff735-f76d-413a-bff8-8e0dfd4177c2"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.201286 4923 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.202027 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "216ff735-f76d-413a-bff8-8e0dfd4177c2" (UID: "216ff735-f76d-413a-bff8-8e0dfd4177c2"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.202183 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "216ff735-f76d-413a-bff8-8e0dfd4177c2" (UID: "216ff735-f76d-413a-bff8-8e0dfd4177c2"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.203085 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "216ff735-f76d-413a-bff8-8e0dfd4177c2" (UID: "216ff735-f76d-413a-bff8-8e0dfd4177c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.207006 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216ff735-f76d-413a-bff8-8e0dfd4177c2-kube-api-access-gjmj6" (OuterVolumeSpecName: "kube-api-access-gjmj6") pod "216ff735-f76d-413a-bff8-8e0dfd4177c2" (UID: "216ff735-f76d-413a-bff8-8e0dfd4177c2"). InnerVolumeSpecName "kube-api-access-gjmj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.214107 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "216ff735-f76d-413a-bff8-8e0dfd4177c2" (UID: "216ff735-f76d-413a-bff8-8e0dfd4177c2"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.303212 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.303272 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjmj6\" (UniqueName: \"kubernetes.io/projected/216ff735-f76d-413a-bff8-8e0dfd4177c2-kube-api-access-gjmj6\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.303294 4923 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.303312 4923 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/216ff735-f76d-413a-bff8-8e0dfd4177c2-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.303395 4923 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.314525 4923 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.404955 4923 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.463366 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["horizon-kuttl-tests/rabbitmq-server-0"] Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.607653 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfsv6\" (UniqueName: \"kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.607739 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts\") pod \"root-account-create-update-72fxk\" (UID: \"01502f16-7a10-48a6-bdd9-a125aa2d2ecf\") " pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.607886 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.607939 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts podName:01502f16-7a10-48a6-bdd9-a125aa2d2ecf nodeName:}" failed. No retries permitted until 2026-03-21 04:38:01.607920635 +0000 UTC m=+1246.760931722 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts") pod "root-account-create-update-72fxk" (UID: "01502f16-7a10-48a6-bdd9-a125aa2d2ecf") : configmap "openstack-scripts" not found Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.608035 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.608096 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts podName:80d61db4-20c9-4ea6-a8d7-435f200abfc4 nodeName:}" failed. No retries permitted until 2026-03-21 04:38:00.608079229 +0000 UTC m=+1245.761090356 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts") pod "keystone12c3-account-delete-9mvjj" (UID: "80d61db4-20c9-4ea6-a8d7-435f200abfc4") : configmap "openstack-scripts" not found Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.610935 4923 projected.go:194] Error preparing data for projected volume kube-api-access-bfsv6 for pod horizon-kuttl-tests/root-account-create-update-72fxk: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.610977 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6 podName:01502f16-7a10-48a6-bdd9-a125aa2d2ecf nodeName:}" failed. No retries permitted until 2026-03-21 04:38:01.610967971 +0000 UTC m=+1246.763979048 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bfsv6" (UniqueName: "kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6") pod "root-account-create-update-72fxk" (UID: "01502f16-7a10-48a6-bdd9-a125aa2d2ecf") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.913625 4923 generic.go:334] "Generic (PLEG): container finished" podID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerID="2420d900b3eea837ab2360c14eefe9a4493c119861a43609bfcdae7560be47b2" exitCode=1 Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.913752 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" event={"ID":"80d61db4-20c9-4ea6-a8d7-435f200abfc4","Type":"ContainerDied","Data":"2420d900b3eea837ab2360c14eefe9a4493c119861a43609bfcdae7560be47b2"} Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.913914 4923 scope.go:117] "RemoveContainer" containerID="862549dac9dec48804471c14c5088c2685fd509df5671e6d4585bb3af12c1c05" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.914496 4923 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" secret="" err="secret \"galera-openstack-dockercfg-p97f5\" not found" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.914572 4923 scope.go:117] "RemoveContainer" containerID="2420d900b3eea837ab2360c14eefe9a4493c119861a43609bfcdae7560be47b2" Mar 21 04:37:59 crc kubenswrapper[4923]: E0321 04:37:59.915184 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone12c3-account-delete-9mvjj_horizon-kuttl-tests(80d61db4-20c9-4ea6-a8d7-435f200abfc4)\"" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.920575 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/root-account-create-update-72fxk" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.920550 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-2" event={"ID":"216ff735-f76d-413a-bff8-8e0dfd4177c2","Type":"ContainerDied","Data":"3eb573dcfc7a53e006b2505306542ebc524eeb2d62caf3aa4c6afa2d8ebe485f"} Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.921006 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-2" Mar 21 04:37:59 crc kubenswrapper[4923]: I0321 04:37:59.980301 4923 scope.go:117] "RemoveContainer" containerID="d2d47752695a7418eef6b77def1c5fe7f38251dbcc7064bcdfab145553e44ca2" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.011609 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-72fxk"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.018285 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/root-account-create-update-72fxk"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.023955 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.029054 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/openstack-galera-2"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.032782 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/rabbitmq-server-0" podUID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerName="rabbitmq" containerID="cri-o://ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd" gracePeriod=604800 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.054457 4923 scope.go:117] "RemoveContainer" containerID="6e3a53b71ecefaa546e459f718049404f28b24151fee67a788c66a173476317b" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.118534 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfsv6\" (UniqueName: \"kubernetes.io/projected/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-kube-api-access-bfsv6\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.118584 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01502f16-7a10-48a6-bdd9-a125aa2d2ecf-operator-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.129626 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567798-ddkkh"] Mar 21 04:38:00 crc kubenswrapper[4923]: E0321 04:38:00.130012 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerName="mysql-bootstrap" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.130033 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerName="mysql-bootstrap" Mar 21 04:38:00 crc kubenswrapper[4923]: E0321 04:38:00.130073 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerName="galera" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.130084 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerName="galera" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.130264 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" containerName="galera" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.130966 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567798-ddkkh" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.133015 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.133078 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.134910 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.141441 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567798-ddkkh"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.178752 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/openstack-galera-1" podUID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerName="galera" containerID="cri-o://558281794c538bc786bd95ab7ca67e12cc9cd51b4f5f3788849264ef7aabda98" gracePeriod=28 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.320829 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkj6j\" (UniqueName: \"kubernetes.io/projected/3134e8ba-8705-4780-aa7f-5a644e3949ee-kube-api-access-vkj6j\") pod \"auto-csr-approver-29567798-ddkkh\" (UID: \"3134e8ba-8705-4780-aa7f-5a644e3949ee\") " pod="openshift-infra/auto-csr-approver-29567798-ddkkh" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.365682 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01502f16-7a10-48a6-bdd9-a125aa2d2ecf" path="/var/lib/kubelet/pods/01502f16-7a10-48a6-bdd9-a125aa2d2ecf/volumes" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.366047 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216ff735-f76d-413a-bff8-8e0dfd4177c2" 
path="/var/lib/kubelet/pods/216ff735-f76d-413a-bff8-8e0dfd4177c2/volumes" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.414191 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.414411 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" podUID="19d00725-90e1-4ad2-93b2-b5e71145cd2c" containerName="manager" containerID="cri-o://fcee0155bef288c395693ec8daa803b950d7a17971884f8b369e5446dbadb6d8" gracePeriod=10 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.422074 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkj6j\" (UniqueName: \"kubernetes.io/projected/3134e8ba-8705-4780-aa7f-5a644e3949ee-kube-api-access-vkj6j\") pod \"auto-csr-approver-29567798-ddkkh\" (UID: \"3134e8ba-8705-4780-aa7f-5a644e3949ee\") " pod="openshift-infra/auto-csr-approver-29567798-ddkkh" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.443070 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkj6j\" (UniqueName: \"kubernetes.io/projected/3134e8ba-8705-4780-aa7f-5a644e3949ee-kube-api-access-vkj6j\") pod \"auto-csr-approver-29567798-ddkkh\" (UID: \"3134e8ba-8705-4780-aa7f-5a644e3949ee\") " pod="openshift-infra/auto-csr-approver-29567798-ddkkh" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.449008 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567798-ddkkh" Mar 21 04:38:00 crc kubenswrapper[4923]: E0321 04:38:00.628357 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 21 04:38:00 crc kubenswrapper[4923]: E0321 04:38:00.628970 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts podName:80d61db4-20c9-4ea6-a8d7-435f200abfc4 nodeName:}" failed. No retries permitted until 2026-03-21 04:38:02.628646338 +0000 UTC m=+1247.781657415 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts") pod "keystone12c3-account-delete-9mvjj" (UID: "80d61db4-20c9-4ea6-a8d7-435f200abfc4") : configmap "openstack-scripts" not found Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.691436 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-zldnz"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.691637 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/horizon-operator-index-zldnz" podUID="fa331624-aac7-4683-abe0-7e0f37b4b121" containerName="registry-server" containerID="cri-o://28aa09ba1ea8eee400bbbd93f70337c969b23d60b776918623fe6be8b69371fb" gracePeriod=30 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.732407 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.742637 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/e5928313f4c155117cb7da8d3bee189b1206f90e016ca2a3b77070b737chsb5"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.910164 4923 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.932306 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-config-data\") pod \"09f2119f-aa10-497e-bc3b-547681df5eb4\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.932514 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-kolla-config\") pod \"09f2119f-aa10-497e-bc3b-547681df5eb4\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.932591 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6vd8\" (UniqueName: \"kubernetes.io/projected/09f2119f-aa10-497e-bc3b-547681df5eb4-kube-api-access-c6vd8\") pod \"09f2119f-aa10-497e-bc3b-547681df5eb4\" (UID: \"09f2119f-aa10-497e-bc3b-547681df5eb4\") " Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.935778 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-config-data" (OuterVolumeSpecName: "config-data") pod "09f2119f-aa10-497e-bc3b-547681df5eb4" (UID: "09f2119f-aa10-497e-bc3b-547681df5eb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.935787 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "09f2119f-aa10-497e-bc3b-547681df5eb4" (UID: "09f2119f-aa10-497e-bc3b-547681df5eb4"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.941552 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f2119f-aa10-497e-bc3b-547681df5eb4-kube-api-access-c6vd8" (OuterVolumeSpecName: "kube-api-access-c6vd8") pod "09f2119f-aa10-497e-bc3b-547681df5eb4" (UID: "09f2119f-aa10-497e-bc3b-547681df5eb4"). InnerVolumeSpecName "kube-api-access-c6vd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.945085 4923 generic.go:334] "Generic (PLEG): container finished" podID="fa331624-aac7-4683-abe0-7e0f37b4b121" containerID="28aa09ba1ea8eee400bbbd93f70337c969b23d60b776918623fe6be8b69371fb" exitCode=0 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.945146 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-zldnz" event={"ID":"fa331624-aac7-4683-abe0-7e0f37b4b121","Type":"ContainerDied","Data":"28aa09ba1ea8eee400bbbd93f70337c969b23d60b776918623fe6be8b69371fb"} Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.946678 4923 generic.go:334] "Generic (PLEG): container finished" podID="09f2119f-aa10-497e-bc3b-547681df5eb4" containerID="b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d" exitCode=0 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.946723 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"09f2119f-aa10-497e-bc3b-547681df5eb4","Type":"ContainerDied","Data":"b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d"} Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.946743 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/memcached-0" event={"ID":"09f2119f-aa10-497e-bc3b-547681df5eb4","Type":"ContainerDied","Data":"6d794cf48ce6618eb5d093ad5d0e1a20cf3e36f228da2d93ae004a98ae072dab"} Mar 21 04:38:00 crc 
kubenswrapper[4923]: I0321 04:38:00.946765 4923 scope.go:117] "RemoveContainer" containerID="b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.946858 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/memcached-0" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.952863 4923 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" secret="" err="secret \"galera-openstack-dockercfg-p97f5\" not found" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.952902 4923 scope.go:117] "RemoveContainer" containerID="2420d900b3eea837ab2360c14eefe9a4493c119861a43609bfcdae7560be47b2" Mar 21 04:38:00 crc kubenswrapper[4923]: E0321 04:38:00.953189 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone12c3-account-delete-9mvjj_horizon-kuttl-tests(80d61db4-20c9-4ea6-a8d7-435f200abfc4)\"" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.954906 4923 generic.go:334] "Generic (PLEG): container finished" podID="25ef91ae-d440-42d3-b259-8eacbb269ee0" containerID="bef0072c8942a47f961e8b32d5c002c89bced5e3cf201f72c74e2ba70ac1c2c0" exitCode=0 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.955072 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" event={"ID":"25ef91ae-d440-42d3-b259-8eacbb269ee0","Type":"ContainerDied","Data":"bef0072c8942a47f961e8b32d5c002c89bced5e3cf201f72c74e2ba70ac1c2c0"} Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.956878 4923 generic.go:334] "Generic (PLEG): container finished" 
podID="19d00725-90e1-4ad2-93b2-b5e71145cd2c" containerID="fcee0155bef288c395693ec8daa803b950d7a17971884f8b369e5446dbadb6d8" exitCode=0 Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.956895 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" event={"ID":"19d00725-90e1-4ad2-93b2-b5e71145cd2c","Type":"ContainerDied","Data":"fcee0155bef288c395693ec8daa803b950d7a17971884f8b369e5446dbadb6d8"} Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.987830 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567798-ddkkh"] Mar 21 04:38:00 crc kubenswrapper[4923]: I0321 04:38:00.993774 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Mar 21 04:38:00 crc kubenswrapper[4923]: W0321 04:38:00.997169 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3134e8ba_8705_4780_aa7f_5a644e3949ee.slice/crio-8e7c0e75eab83fa1c0077facf99484660619a369f278ed00880c7ccc8ea1ae94 WatchSource:0}: Error finding container 8e7c0e75eab83fa1c0077facf99484660619a369f278ed00880c7ccc8ea1ae94: Status 404 returned error can't find the container with id 8e7c0e75eab83fa1c0077facf99484660619a369f278ed00880c7ccc8ea1ae94 Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.000996 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/memcached-0"] Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.006532 4923 scope.go:117] "RemoveContainer" containerID="b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.006982 4923 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 04:38:01 crc kubenswrapper[4923]: E0321 04:38:01.007450 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d\": container with ID starting with b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d not found: ID does not exist" containerID="b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.007474 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d"} err="failed to get container status \"b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d\": rpc error: code = NotFound desc = could not find container \"b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d\": container with ID starting with b42c65f9194744583b1264e188b905d6c97e4448e0ab27b93d99694bc4fbaa7d not found: ID does not exist" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.034292 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6vd8\" (UniqueName: \"kubernetes.io/projected/09f2119f-aa10-497e-bc3b-547681df5eb4-kube-api-access-c6vd8\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.034339 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.034351 4923 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09f2119f-aa10-497e-bc3b-547681df5eb4-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.058670 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.061918 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.066457 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138269 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-scripts\") pod \"25ef91ae-d440-42d3-b259-8eacbb269ee0\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138662 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-fernet-keys\") pod \"25ef91ae-d440-42d3-b259-8eacbb269ee0\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138712 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-webhook-cert\") pod \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138766 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-config-data\") pod \"25ef91ae-d440-42d3-b259-8eacbb269ee0\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138794 4923 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-apiservice-cert\") pod \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138837 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-credential-keys\") pod \"25ef91ae-d440-42d3-b259-8eacbb269ee0\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138936 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr6hq\" (UniqueName: \"kubernetes.io/projected/19d00725-90e1-4ad2-93b2-b5e71145cd2c-kube-api-access-nr6hq\") pod \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\" (UID: \"19d00725-90e1-4ad2-93b2-b5e71145cd2c\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138969 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9nsg\" (UniqueName: \"kubernetes.io/projected/25ef91ae-d440-42d3-b259-8eacbb269ee0-kube-api-access-p9nsg\") pod \"25ef91ae-d440-42d3-b259-8eacbb269ee0\" (UID: \"25ef91ae-d440-42d3-b259-8eacbb269ee0\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.138991 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96qn6\" (UniqueName: \"kubernetes.io/projected/fa331624-aac7-4683-abe0-7e0f37b4b121-kube-api-access-96qn6\") pod \"fa331624-aac7-4683-abe0-7e0f37b4b121\" (UID: \"fa331624-aac7-4683-abe0-7e0f37b4b121\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.154504 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod 
"25ef91ae-d440-42d3-b259-8eacbb269ee0" (UID: "25ef91ae-d440-42d3-b259-8eacbb269ee0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.159044 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "19d00725-90e1-4ad2-93b2-b5e71145cd2c" (UID: "19d00725-90e1-4ad2-93b2-b5e71145cd2c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.160733 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ef91ae-d440-42d3-b259-8eacbb269ee0-kube-api-access-p9nsg" (OuterVolumeSpecName: "kube-api-access-p9nsg") pod "25ef91ae-d440-42d3-b259-8eacbb269ee0" (UID: "25ef91ae-d440-42d3-b259-8eacbb269ee0"). InnerVolumeSpecName "kube-api-access-p9nsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.162657 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "25ef91ae-d440-42d3-b259-8eacbb269ee0" (UID: "25ef91ae-d440-42d3-b259-8eacbb269ee0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.162896 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "19d00725-90e1-4ad2-93b2-b5e71145cd2c" (UID: "19d00725-90e1-4ad2-93b2-b5e71145cd2c"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.164501 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa331624-aac7-4683-abe0-7e0f37b4b121-kube-api-access-96qn6" (OuterVolumeSpecName: "kube-api-access-96qn6") pod "fa331624-aac7-4683-abe0-7e0f37b4b121" (UID: "fa331624-aac7-4683-abe0-7e0f37b4b121"). InnerVolumeSpecName "kube-api-access-96qn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.164708 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d00725-90e1-4ad2-93b2-b5e71145cd2c-kube-api-access-nr6hq" (OuterVolumeSpecName: "kube-api-access-nr6hq") pod "19d00725-90e1-4ad2-93b2-b5e71145cd2c" (UID: "19d00725-90e1-4ad2-93b2-b5e71145cd2c"). InnerVolumeSpecName "kube-api-access-nr6hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.167496 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-scripts" (OuterVolumeSpecName: "scripts") pod "25ef91ae-d440-42d3-b259-8eacbb269ee0" (UID: "25ef91ae-d440-42d3-b259-8eacbb269ee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.188057 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-config-data" (OuterVolumeSpecName: "config-data") pod "25ef91ae-d440-42d3-b259-8eacbb269ee0" (UID: "25ef91ae-d440-42d3-b259-8eacbb269ee0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240567 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr6hq\" (UniqueName: \"kubernetes.io/projected/19d00725-90e1-4ad2-93b2-b5e71145cd2c-kube-api-access-nr6hq\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240599 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9nsg\" (UniqueName: \"kubernetes.io/projected/25ef91ae-d440-42d3-b259-8eacbb269ee0-kube-api-access-p9nsg\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240610 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96qn6\" (UniqueName: \"kubernetes.io/projected/fa331624-aac7-4683-abe0-7e0f37b4b121-kube-api-access-96qn6\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240618 4923 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240627 4923 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240637 4923 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240646 4923 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-config-data\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240654 4923 
reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d00725-90e1-4ad2-93b2-b5e71145cd2c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.240665 4923 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25ef91ae-d440-42d3-b259-8eacbb269ee0-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.500728 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.544640 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cz64\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-kube-api-access-7cz64\") pod \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.544697 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-plugins\") pod \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.544746 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-confd\") pod \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.544798 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9701db56-65b1-4cee-8942-69fc9cc4e7b8-plugins-conf\") pod 
\"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.544830 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9701db56-65b1-4cee-8942-69fc9cc4e7b8-erlang-cookie-secret\") pod \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.544894 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-erlang-cookie\") pod \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.544947 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9701db56-65b1-4cee-8942-69fc9cc4e7b8-pod-info\") pod \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.545070 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425\") pod \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\" (UID: \"9701db56-65b1-4cee-8942-69fc9cc4e7b8\") " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.545304 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.545363 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.545372 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9701db56-65b1-4cee-8942-69fc9cc4e7b8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.548491 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9701db56-65b1-4cee-8942-69fc9cc4e7b8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.548575 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-kube-api-access-7cz64" (OuterVolumeSpecName: "kube-api-access-7cz64") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "kube-api-access-7cz64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.559581 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9701db56-65b1-4cee-8942-69fc9cc4e7b8-pod-info" (OuterVolumeSpecName: "pod-info") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.567396 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425" (OuterVolumeSpecName: "persistence") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "pvc-c882a054-a638-41a8-95e3-fce60ed92425". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.605897 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9701db56-65b1-4cee-8942-69fc9cc4e7b8" (UID: "9701db56-65b1-4cee-8942-69fc9cc4e7b8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.646907 4923 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.646941 4923 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9701db56-65b1-4cee-8942-69fc9cc4e7b8-pod-info\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.646979 4923 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c882a054-a638-41a8-95e3-fce60ed92425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425\") on node \"crc\" " Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.646991 4923 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.647003 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cz64\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-kube-api-access-7cz64\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.647013 4923 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9701db56-65b1-4cee-8942-69fc9cc4e7b8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.647021 4923 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9701db56-65b1-4cee-8942-69fc9cc4e7b8-plugins-conf\") on node \"crc\" DevicePath 
\"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.647030 4923 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9701db56-65b1-4cee-8942-69fc9cc4e7b8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.661220 4923 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.661363 4923 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c882a054-a638-41a8-95e3-fce60ed92425" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425") on node "crc" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.750566 4923 reconciler_common.go:293] "Volume detached for volume \"pvc-c882a054-a638-41a8-95e3-fce60ed92425\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c882a054-a638-41a8-95e3-fce60ed92425\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.976207 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-zldnz" event={"ID":"fa331624-aac7-4683-abe0-7e0f37b4b121","Type":"ContainerDied","Data":"f76367a85a99a73fe0d13a5af52e9df0e981daa4d6fecec3f1f31f495a4566c9"} Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.976553 4923 scope.go:117] "RemoveContainer" containerID="28aa09ba1ea8eee400bbbd93f70337c969b23d60b776918623fe6be8b69371fb" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.976572 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-zldnz" Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.993381 4923 generic.go:334] "Generic (PLEG): container finished" podID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerID="558281794c538bc786bd95ab7ca67e12cc9cd51b4f5f3788849264ef7aabda98" exitCode=0 Mar 21 04:38:01 crc kubenswrapper[4923]: I0321 04:38:01.993479 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d","Type":"ContainerDied","Data":"558281794c538bc786bd95ab7ca67e12cc9cd51b4f5f3788849264ef7aabda98"} Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.000741 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" event={"ID":"25ef91ae-d440-42d3-b259-8eacbb269ee0","Type":"ContainerDied","Data":"b361b312c78daefb0aad02be0ac8c9a8844a370f3a89a1e2718f5fc3701d7620"} Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.000836 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/keystone-8598c6cb74-zm4b6" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.003354 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567798-ddkkh" event={"ID":"3134e8ba-8705-4780-aa7f-5a644e3949ee","Type":"ContainerStarted","Data":"8e7c0e75eab83fa1c0077facf99484660619a369f278ed00880c7ccc8ea1ae94"} Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.014373 4923 generic.go:334] "Generic (PLEG): container finished" podID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerID="ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd" exitCode=0 Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.014503 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"9701db56-65b1-4cee-8942-69fc9cc4e7b8","Type":"ContainerDied","Data":"ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd"} Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.014527 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/rabbitmq-server-0" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.014541 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/rabbitmq-server-0" event={"ID":"9701db56-65b1-4cee-8942-69fc9cc4e7b8","Type":"ContainerDied","Data":"a2734637b87ded42751ed71f031d0aa820539df79dc396c19fe2866b3e4d60ba"} Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.024976 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" event={"ID":"19d00725-90e1-4ad2-93b2-b5e71145cd2c","Type":"ContainerDied","Data":"2f34036d132c39e1486742d3cd33ff83435cd29b128c4e78eca5b3a2514e85cf"} Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.025058 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.035210 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.035244 4923 scope.go:117] "RemoveContainer" containerID="bef0072c8942a47f961e8b32d5c002c89bced5e3cf201f72c74e2ba70ac1c2c0" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.036371 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" podUID="9be365be-2d90-4b60-88e4-db66f0d6192f" containerName="manager" containerID="cri-o://b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3" gracePeriod=10 Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.044572 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-index-zldnz"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.049644 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/horizon-operator-index-zldnz"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.063817 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-8598c6cb74-zm4b6"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.069838 4923 scope.go:117] "RemoveContainer" containerID="ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.069960 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-8598c6cb74-zm4b6"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.073598 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 
04:38:02.077196 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6b7c57b7cc-h4dgh"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.100410 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.108831 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/rabbitmq-server-0"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.193432 4923 scope.go:117] "RemoveContainer" containerID="7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.209156 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.245125 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="horizon-kuttl-tests/openstack-galera-0" podUID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerName="galera" containerID="cri-o://8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734" gracePeriod=26 Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.247063 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-jqj57"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.247446 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-jqj57" podUID="862bd87a-5130-41e2-a883-ac803db8df3b" containerName="registry-server" containerID="cri-o://21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25" gracePeriod=30 Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.256616 4923 scope.go:117] "RemoveContainer" containerID="ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.257361 4923 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kolla-config\") pod \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.257409 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-generated\") pod \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.257507 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.257659 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-operator-scripts\") pod \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.257703 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99fwg\" (UniqueName: \"kubernetes.io/projected/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kube-api-access-99fwg\") pod \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.257741 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-default\") pod 
\"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\" (UID: \"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d\") " Mar 21 04:38:02 crc kubenswrapper[4923]: E0321 04:38:02.258221 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd\": container with ID starting with ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd not found: ID does not exist" containerID="ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.258370 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd"} err="failed to get container status \"ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd\": rpc error: code = NotFound desc = could not find container \"ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd\": container with ID starting with ddc219a38dcfdaa6c3463b79bf77a10620190660b9d64e171b34ced1fe9f8fcd not found: ID does not exist" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.258468 4923 scope.go:117] "RemoveContainer" containerID="7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.259777 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" (UID: "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: E0321 04:38:02.260388 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9\": container with ID starting with 7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9 not found: ID does not exist" containerID="7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.260423 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9"} err="failed to get container status \"7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9\": rpc error: code = NotFound desc = could not find container \"7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9\": container with ID starting with 7b2ddaee519096a847bbb851a5e745ce3f8275993edc65df7976bdad952d6bc9 not found: ID does not exist" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.260455 4923 scope.go:117] "RemoveContainer" containerID="fcee0155bef288c395693ec8daa803b950d7a17971884f8b369e5446dbadb6d8" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.260865 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" (UID: "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.260966 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" (UID: "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.261258 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" (UID: "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.271251 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kube-api-access-99fwg" (OuterVolumeSpecName: "kube-api-access-99fwg") pod "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" (UID: "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d"). InnerVolumeSpecName "kube-api-access-99fwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.280298 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" (UID: "7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.294118 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.300466 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/25b5da83b847401861089ca4a9b7591d95e0d659639006dce400c423edsxz6m"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.362433 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99fwg\" (UniqueName: \"kubernetes.io/projected/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kube-api-access-99fwg\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.362533 4923 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.362546 4923 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.362557 4923 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.362586 4923 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.362596 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.367084 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f2119f-aa10-497e-bc3b-547681df5eb4" path="/var/lib/kubelet/pods/09f2119f-aa10-497e-bc3b-547681df5eb4/volumes" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.367550 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d00725-90e1-4ad2-93b2-b5e71145cd2c" path="/var/lib/kubelet/pods/19d00725-90e1-4ad2-93b2-b5e71145cd2c/volumes" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.368391 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ef91ae-d440-42d3-b259-8eacbb269ee0" path="/var/lib/kubelet/pods/25ef91ae-d440-42d3-b259-8eacbb269ee0/volumes" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.368830 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58877c28-2cb2-4659-9405-9242036b8a98" path="/var/lib/kubelet/pods/58877c28-2cb2-4659-9405-9242036b8a98/volumes" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.369934 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" path="/var/lib/kubelet/pods/9701db56-65b1-4cee-8942-69fc9cc4e7b8/volumes" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.370953 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa331624-aac7-4683-abe0-7e0f37b4b121" path="/var/lib/kubelet/pods/fa331624-aac7-4683-abe0-7e0f37b4b121/volumes" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.371662 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc3ac6f5-8825-4713-b073-ae95374bcd0e" path="/var/lib/kubelet/pods/fc3ac6f5-8825-4713-b073-ae95374bcd0e/volumes" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.379051 4923 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.453222 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.457866 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-12c3-account-create-update-s6f52"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.463410 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone12c3-account-delete-9mvjj"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.464535 4923 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.468148 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-rmgbw"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.469100 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.472552 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone-db-create-rmgbw"] Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.565709 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-webhook-cert\") pod \"9be365be-2d90-4b60-88e4-db66f0d6192f\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.565761 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/9be365be-2d90-4b60-88e4-db66f0d6192f-kube-api-access-lbdjd\") pod \"9be365be-2d90-4b60-88e4-db66f0d6192f\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.565885 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-apiservice-cert\") pod \"9be365be-2d90-4b60-88e4-db66f0d6192f\" (UID: \"9be365be-2d90-4b60-88e4-db66f0d6192f\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.569679 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "9be365be-2d90-4b60-88e4-db66f0d6192f" (UID: "9be365be-2d90-4b60-88e4-db66f0d6192f"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.569748 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be365be-2d90-4b60-88e4-db66f0d6192f-kube-api-access-lbdjd" (OuterVolumeSpecName: "kube-api-access-lbdjd") pod "9be365be-2d90-4b60-88e4-db66f0d6192f" (UID: "9be365be-2d90-4b60-88e4-db66f0d6192f"). InnerVolumeSpecName "kube-api-access-lbdjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.570354 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9be365be-2d90-4b60-88e4-db66f0d6192f" (UID: "9be365be-2d90-4b60-88e4-db66f0d6192f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.667398 4923 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.667589 4923 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9be365be-2d90-4b60-88e4-db66f0d6192f-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.667599 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbdjd\" (UniqueName: \"kubernetes.io/projected/9be365be-2d90-4b60-88e4-db66f0d6192f-kube-api-access-lbdjd\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: E0321 04:38:02.667673 4923 configmap.go:193] Couldn't get configMap horizon-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Mar 21 04:38:02 crc kubenswrapper[4923]: E0321 
04:38:02.667824 4923 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts podName:80d61db4-20c9-4ea6-a8d7-435f200abfc4 nodeName:}" failed. No retries permitted until 2026-03-21 04:38:06.667783538 +0000 UTC m=+1251.820794675 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts") pod "keystone12c3-account-delete-9mvjj" (UID: "80d61db4-20c9-4ea6-a8d7-435f200abfc4") : configmap "openstack-scripts" not found Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.701168 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.767362 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.870010 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzg6r\" (UniqueName: \"kubernetes.io/projected/862bd87a-5130-41e2-a883-ac803db8df3b-kube-api-access-wzg6r\") pod \"862bd87a-5130-41e2-a883-ac803db8df3b\" (UID: \"862bd87a-5130-41e2-a883-ac803db8df3b\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.870115 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67p5b\" (UniqueName: \"kubernetes.io/projected/80d61db4-20c9-4ea6-a8d7-435f200abfc4-kube-api-access-67p5b\") pod \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.870223 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts\") pod \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\" (UID: \"80d61db4-20c9-4ea6-a8d7-435f200abfc4\") " Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.870747 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80d61db4-20c9-4ea6-a8d7-435f200abfc4" (UID: "80d61db4-20c9-4ea6-a8d7-435f200abfc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.875030 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d61db4-20c9-4ea6-a8d7-435f200abfc4-kube-api-access-67p5b" (OuterVolumeSpecName: "kube-api-access-67p5b") pod "80d61db4-20c9-4ea6-a8d7-435f200abfc4" (UID: "80d61db4-20c9-4ea6-a8d7-435f200abfc4"). InnerVolumeSpecName "kube-api-access-67p5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.875510 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862bd87a-5130-41e2-a883-ac803db8df3b-kube-api-access-wzg6r" (OuterVolumeSpecName: "kube-api-access-wzg6r") pod "862bd87a-5130-41e2-a883-ac803db8df3b" (UID: "862bd87a-5130-41e2-a883-ac803db8df3b"). InnerVolumeSpecName "kube-api-access-wzg6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.915143 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.972210 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzg6r\" (UniqueName: \"kubernetes.io/projected/862bd87a-5130-41e2-a883-ac803db8df3b-kube-api-access-wzg6r\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.972250 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67p5b\" (UniqueName: \"kubernetes.io/projected/80d61db4-20c9-4ea6-a8d7-435f200abfc4-kube-api-access-67p5b\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:02 crc kubenswrapper[4923]: I0321 04:38:02.972263 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80d61db4-20c9-4ea6-a8d7-435f200abfc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.032174 4923 generic.go:334] "Generic (PLEG): container finished" podID="862bd87a-5130-41e2-a883-ac803db8df3b" containerID="21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25" exitCode=0 Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.032258 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-jqj57" event={"ID":"862bd87a-5130-41e2-a883-ac803db8df3b","Type":"ContainerDied","Data":"21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.032294 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-jqj57" event={"ID":"862bd87a-5130-41e2-a883-ac803db8df3b","Type":"ContainerDied","Data":"aa47fdadd385964b8feed7e563cc9354c833e2543397ea60151d38f8d6999e0a"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.032302 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-jqj57" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.032317 4923 scope.go:117] "RemoveContainer" containerID="21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.036554 4923 generic.go:334] "Generic (PLEG): container finished" podID="9be365be-2d90-4b60-88e4-db66f0d6192f" containerID="b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3" exitCode=0 Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.036612 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" event={"ID":"9be365be-2d90-4b60-88e4-db66f0d6192f","Type":"ContainerDied","Data":"b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.036635 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" event={"ID":"9be365be-2d90-4b60-88e4-db66f0d6192f","Type":"ContainerDied","Data":"c5c4925cae33c430e0ad84f0237eb6cd72799c6db52cc22ec0d5542bccf02ce9"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.036692 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.039874 4923 generic.go:334] "Generic (PLEG): container finished" podID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerID="8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734" exitCode=0 Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.039918 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0","Type":"ContainerDied","Data":"8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.039939 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-0" event={"ID":"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0","Type":"ContainerDied","Data":"45a669c11dadb29131042b02aea11571e75ab346d95888a0f0dbaa864e891131"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.039988 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-0" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.045538 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/openstack-galera-1" event={"ID":"7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d","Type":"ContainerDied","Data":"b1cd6681a0d3dab3f748fd7d0211351745e83efa254960dac801b388a0700637"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.045598 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="horizon-kuttl-tests/openstack-galera-1" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.047485 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" event={"ID":"80d61db4-20c9-4ea6-a8d7-435f200abfc4","Type":"ContainerDied","Data":"b1a3fa65e83c29cce818d090b6835b66259c907abace49d3a54530c5c643c2c1"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.047556 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="horizon-kuttl-tests/keystone12c3-account-delete-9mvjj" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.051693 4923 generic.go:334] "Generic (PLEG): container finished" podID="3134e8ba-8705-4780-aa7f-5a644e3949ee" containerID="ff8855d19fbe371a25615edf487270a2b9271ec581f85e810396c50aea439a38" exitCode=0 Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.051976 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567798-ddkkh" event={"ID":"3134e8ba-8705-4780-aa7f-5a644e3949ee","Type":"ContainerDied","Data":"ff8855d19fbe371a25615edf487270a2b9271ec581f85e810396c50aea439a38"} Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.053454 4923 scope.go:117] "RemoveContainer" containerID="21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25" Mar 21 04:38:03 crc kubenswrapper[4923]: E0321 04:38:03.054168 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25\": container with ID starting with 21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25 not found: ID does not exist" containerID="21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.054246 4923 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25"} err="failed to get container status \"21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25\": rpc error: code = NotFound desc = could not find container \"21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25\": container with ID starting with 21cd675804de07353a74fcc3531873c392f53da51530f4fef13ecdd4abf8da25 not found: ID does not exist" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.054299 4923 scope.go:117] "RemoveContainer" containerID="b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.078413 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-default\") pod \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.078506 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kolla-config\") pod \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.078568 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.078653 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-operator-scripts\") pod \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\" (UID: 
\"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.078743 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgtjp\" (UniqueName: \"kubernetes.io/projected/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kube-api-access-cgtjp\") pod \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.078793 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-generated\") pod \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\" (UID: \"0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0\") " Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.079447 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" (UID: "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.079948 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" (UID: "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.080216 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" (UID: "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.081377 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" (UID: "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.086544 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kube-api-access-cgtjp" (OuterVolumeSpecName: "kube-api-access-cgtjp") pod "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" (UID: "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0"). InnerVolumeSpecName "kube-api-access-cgtjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.091422 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.100718 4923 scope.go:117] "RemoveContainer" containerID="b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.106277 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/openstack-galera-1"] Mar 21 04:38:03 crc kubenswrapper[4923]: E0321 04:38:03.107517 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3\": container with ID starting with b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3 not found: ID does not exist" containerID="b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.107563 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3"} err="failed to get container status \"b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3\": rpc error: code = NotFound desc = could not find container \"b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3\": container with ID starting with b7823f2b4eb44cdd06496e24346109dd9ea5837150249eff066e26245951f0b3 not found: ID does not exist" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.107593 4923 scope.go:117] "RemoveContainer" containerID="8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.107632 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" 
(OuterVolumeSpecName: "mysql-db") pod "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" (UID: "0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.122437 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.135429 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77c5d8b87c-gmbv2"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.138592 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/keystone12c3-account-delete-9mvjj"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.144040 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/keystone12c3-account-delete-9mvjj"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.148304 4923 scope.go:117] "RemoveContainer" containerID="4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.149583 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-jqj57"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.154758 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-jqj57"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.168196 4923 scope.go:117] "RemoveContainer" containerID="8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734" Mar 21 04:38:03 crc kubenswrapper[4923]: E0321 04:38:03.168647 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734\": container with ID starting with 
8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734 not found: ID does not exist" containerID="8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.168676 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734"} err="failed to get container status \"8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734\": rpc error: code = NotFound desc = could not find container \"8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734\": container with ID starting with 8728941bb7ac0272be4ef2e7a5ad04b2ff20128c08b2d09a809a80ea98990734 not found: ID does not exist" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.168697 4923 scope.go:117] "RemoveContainer" containerID="4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0" Mar 21 04:38:03 crc kubenswrapper[4923]: E0321 04:38:03.169309 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0\": container with ID starting with 4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0 not found: ID does not exist" containerID="4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.169428 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0"} err="failed to get container status \"4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0\": rpc error: code = NotFound desc = could not find container \"4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0\": container with ID starting with 4396985b47f7912ba2f1c2b7009ac0b7d3d3edfca0883a5a0464ee6fb9e648e0 not found: ID does not 
exist" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.169465 4923 scope.go:117] "RemoveContainer" containerID="558281794c538bc786bd95ab7ca67e12cc9cd51b4f5f3788849264ef7aabda98" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.181350 4923 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.181388 4923 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.181429 4923 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.181443 4923 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.182070 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgtjp\" (UniqueName: \"kubernetes.io/projected/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-kube-api-access-cgtjp\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.182109 4923 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.187696 4923 scope.go:117] "RemoveContainer" containerID="70eb13e63c244ffe7b910e005545083669078f0ed021473a043c0d8bf2ab0bbb" Mar 
21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.196585 4923 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.212517 4923 scope.go:117] "RemoveContainer" containerID="2420d900b3eea837ab2360c14eefe9a4493c119861a43609bfcdae7560be47b2" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.283605 4923 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.371746 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Mar 21 04:38:03 crc kubenswrapper[4923]: I0321 04:38:03.371818 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["horizon-kuttl-tests/openstack-galera-0"] Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.326597 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567798-ddkkh" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.370727 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" path="/var/lib/kubelet/pods/0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0/volumes" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.372110 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" path="/var/lib/kubelet/pods/7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d/volumes" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.373112 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" path="/var/lib/kubelet/pods/80d61db4-20c9-4ea6-a8d7-435f200abfc4/volumes" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.374726 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862bd87a-5130-41e2-a883-ac803db8df3b" path="/var/lib/kubelet/pods/862bd87a-5130-41e2-a883-ac803db8df3b/volumes" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.375546 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be365be-2d90-4b60-88e4-db66f0d6192f" path="/var/lib/kubelet/pods/9be365be-2d90-4b60-88e4-db66f0d6192f/volumes" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.376342 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad250f90-bc88-40f3-9795-a800d8ac3af0" path="/var/lib/kubelet/pods/ad250f90-bc88-40f3-9795-a800d8ac3af0/volumes" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.378068 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a7b6c7-0d55-48b8-b305-9bde8cc5181f" path="/var/lib/kubelet/pods/e2a7b6c7-0d55-48b8-b305-9bde8cc5181f/volumes" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.501473 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vkj6j\" (UniqueName: \"kubernetes.io/projected/3134e8ba-8705-4780-aa7f-5a644e3949ee-kube-api-access-vkj6j\") pod \"3134e8ba-8705-4780-aa7f-5a644e3949ee\" (UID: \"3134e8ba-8705-4780-aa7f-5a644e3949ee\") " Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.502408 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5"] Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.502620 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" podUID="c56c4706-b4d1-4fe7-bf97-7328684b55e0" containerName="operator" containerID="cri-o://b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb" gracePeriod=10 Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.523101 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3134e8ba-8705-4780-aa7f-5a644e3949ee-kube-api-access-vkj6j" (OuterVolumeSpecName: "kube-api-access-vkj6j") pod "3134e8ba-8705-4780-aa7f-5a644e3949ee" (UID: "3134e8ba-8705-4780-aa7f-5a644e3949ee"). InnerVolumeSpecName "kube-api-access-vkj6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.603180 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkj6j\" (UniqueName: \"kubernetes.io/projected/3134e8ba-8705-4780-aa7f-5a644e3949ee-kube-api-access-vkj6j\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.693791 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5wtgm"] Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.694009 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" podUID="9faba93f-83d8-45ad-bea2-44e730e7f3a4" containerName="registry-server" containerID="cri-o://9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90" gracePeriod=30 Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.725838 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96"] Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.736637 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590p2p96"] Mar 21 04:38:04 crc kubenswrapper[4923]: I0321 04:38:04.854145 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.008193 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs82h\" (UniqueName: \"kubernetes.io/projected/c56c4706-b4d1-4fe7-bf97-7328684b55e0-kube-api-access-hs82h\") pod \"c56c4706-b4d1-4fe7-bf97-7328684b55e0\" (UID: \"c56c4706-b4d1-4fe7-bf97-7328684b55e0\") " Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.013737 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56c4706-b4d1-4fe7-bf97-7328684b55e0-kube-api-access-hs82h" (OuterVolumeSpecName: "kube-api-access-hs82h") pod "c56c4706-b4d1-4fe7-bf97-7328684b55e0" (UID: "c56c4706-b4d1-4fe7-bf97-7328684b55e0"). InnerVolumeSpecName "kube-api-access-hs82h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.078717 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.081881 4923 generic.go:334] "Generic (PLEG): container finished" podID="c56c4706-b4d1-4fe7-bf97-7328684b55e0" containerID="b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb" exitCode=0 Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.081981 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" event={"ID":"c56c4706-b4d1-4fe7-bf97-7328684b55e0","Type":"ContainerDied","Data":"b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb"} Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.082037 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" event={"ID":"c56c4706-b4d1-4fe7-bf97-7328684b55e0","Type":"ContainerDied","Data":"be8ee70ba24ce2b6352e9ec1924d2a0d0c0665e282c69039b175c065697891ed"} Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.082065 4923 scope.go:117] "RemoveContainer" containerID="b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.081979 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.083916 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567798-ddkkh" event={"ID":"3134e8ba-8705-4780-aa7f-5a644e3949ee","Type":"ContainerDied","Data":"8e7c0e75eab83fa1c0077facf99484660619a369f278ed00880c7ccc8ea1ae94"} Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.083938 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29567798-ddkkh" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.083954 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e7c0e75eab83fa1c0077facf99484660619a369f278ed00880c7ccc8ea1ae94" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.086290 4923 generic.go:334] "Generic (PLEG): container finished" podID="9faba93f-83d8-45ad-bea2-44e730e7f3a4" containerID="9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90" exitCode=0 Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.086314 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" event={"ID":"9faba93f-83d8-45ad-bea2-44e730e7f3a4","Type":"ContainerDied","Data":"9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90"} Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.086353 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" event={"ID":"9faba93f-83d8-45ad-bea2-44e730e7f3a4","Type":"ContainerDied","Data":"d7d0ce6c7f703bdbf49a583bb377876c6fb29cd459b4b30473f37eed59756d0b"} Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.086420 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5wtgm" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.109523 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs82h\" (UniqueName: \"kubernetes.io/projected/c56c4706-b4d1-4fe7-bf97-7328684b55e0-kube-api-access-hs82h\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.110215 4923 scope.go:117] "RemoveContainer" containerID="b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb" Mar 21 04:38:05 crc kubenswrapper[4923]: E0321 04:38:05.110804 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb\": container with ID starting with b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb not found: ID does not exist" containerID="b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.110899 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb"} err="failed to get container status \"b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb\": rpc error: code = NotFound desc = could not find container \"b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb\": container with ID starting with b93db263cd3d538b113957fbfb47d1fbbef11fac7227f9be19125933edcc6afb not found: ID does not exist" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.110960 4923 scope.go:117] "RemoveContainer" containerID="9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.161440 4923 scope.go:117] "RemoveContainer" containerID="9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90" Mar 21 04:38:05 crc 
kubenswrapper[4923]: E0321 04:38:05.162490 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90\": container with ID starting with 9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90 not found: ID does not exist" containerID="9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.162541 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90"} err="failed to get container status \"9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90\": rpc error: code = NotFound desc = could not find container \"9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90\": container with ID starting with 9d2ac74ae9a53f961e8001f386d283b278f3f79067e8d495b0443ad2caa66c90 not found: ID does not exist" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.167101 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5"] Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.173506 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-dz4d5"] Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.210681 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5zqr\" (UniqueName: \"kubernetes.io/projected/9faba93f-83d8-45ad-bea2-44e730e7f3a4-kube-api-access-j5zqr\") pod \"9faba93f-83d8-45ad-bea2-44e730e7f3a4\" (UID: \"9faba93f-83d8-45ad-bea2-44e730e7f3a4\") " Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.219481 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9faba93f-83d8-45ad-bea2-44e730e7f3a4-kube-api-access-j5zqr" (OuterVolumeSpecName: "kube-api-access-j5zqr") pod "9faba93f-83d8-45ad-bea2-44e730e7f3a4" (UID: "9faba93f-83d8-45ad-bea2-44e730e7f3a4"). InnerVolumeSpecName "kube-api-access-j5zqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.312524 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5zqr\" (UniqueName: \"kubernetes.io/projected/9faba93f-83d8-45ad-bea2-44e730e7f3a4-kube-api-access-j5zqr\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.380424 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567792-tpmsl"] Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.384993 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567792-tpmsl"] Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.417322 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5wtgm"] Mar 21 04:38:05 crc kubenswrapper[4923]: I0321 04:38:05.422563 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5wtgm"] Mar 21 04:38:06 crc kubenswrapper[4923]: I0321 04:38:06.365764 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028ed822-e3eb-47f2-9f87-ba89b0f4e3a7" path="/var/lib/kubelet/pods/028ed822-e3eb-47f2-9f87-ba89b0f4e3a7/volumes" Mar 21 04:38:06 crc kubenswrapper[4923]: I0321 04:38:06.366454 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f8c576-1fbc-4288-ad35-efb0873ff5cb" path="/var/lib/kubelet/pods/34f8c576-1fbc-4288-ad35-efb0873ff5cb/volumes" Mar 21 04:38:06 crc kubenswrapper[4923]: I0321 04:38:06.367074 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9faba93f-83d8-45ad-bea2-44e730e7f3a4" 
path="/var/lib/kubelet/pods/9faba93f-83d8-45ad-bea2-44e730e7f3a4/volumes" Mar 21 04:38:06 crc kubenswrapper[4923]: I0321 04:38:06.367903 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56c4706-b4d1-4fe7-bf97-7328684b55e0" path="/var/lib/kubelet/pods/c56c4706-b4d1-4fe7-bf97-7328684b55e0/volumes" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.064458 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9"] Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.065004 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" podUID="b57d2b83-885a-44b0-b334-f0dd96568ba9" containerName="manager" containerID="cri-o://956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69" gracePeriod=10 Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.286694 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-hkph2"] Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.287008 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-hkph2" podUID="85d3f09f-876d-45c0-8631-5df235d5429e" containerName="registry-server" containerID="cri-o://7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb" gracePeriod=30 Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.334972 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4"] Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.339912 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/014ac6de79e0394b0d824e62fe3d0678564789565fd857972d41b2afd46kbx4"] Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.531057 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.547769 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-webhook-cert\") pod \"b57d2b83-885a-44b0-b334-f0dd96568ba9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.547844 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-apiservice-cert\") pod \"b57d2b83-885a-44b0-b334-f0dd96568ba9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.547991 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m94pt\" (UniqueName: \"kubernetes.io/projected/b57d2b83-885a-44b0-b334-f0dd96568ba9-kube-api-access-m94pt\") pod \"b57d2b83-885a-44b0-b334-f0dd96568ba9\" (UID: \"b57d2b83-885a-44b0-b334-f0dd96568ba9\") " Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.559767 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b57d2b83-885a-44b0-b334-f0dd96568ba9" (UID: "b57d2b83-885a-44b0-b334-f0dd96568ba9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.559826 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57d2b83-885a-44b0-b334-f0dd96568ba9-kube-api-access-m94pt" (OuterVolumeSpecName: "kube-api-access-m94pt") pod "b57d2b83-885a-44b0-b334-f0dd96568ba9" (UID: "b57d2b83-885a-44b0-b334-f0dd96568ba9"). 
InnerVolumeSpecName "kube-api-access-m94pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.560029 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "b57d2b83-885a-44b0-b334-f0dd96568ba9" (UID: "b57d2b83-885a-44b0-b334-f0dd96568ba9"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.607813 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.649017 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks98q\" (UniqueName: \"kubernetes.io/projected/85d3f09f-876d-45c0-8631-5df235d5429e-kube-api-access-ks98q\") pod \"85d3f09f-876d-45c0-8631-5df235d5429e\" (UID: \"85d3f09f-876d-45c0-8631-5df235d5429e\") " Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.649262 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m94pt\" (UniqueName: \"kubernetes.io/projected/b57d2b83-885a-44b0-b334-f0dd96568ba9-kube-api-access-m94pt\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.649279 4923 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.649293 4923 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b57d2b83-885a-44b0-b334-f0dd96568ba9-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.652460 4923 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d3f09f-876d-45c0-8631-5df235d5429e-kube-api-access-ks98q" (OuterVolumeSpecName: "kube-api-access-ks98q") pod "85d3f09f-876d-45c0-8631-5df235d5429e" (UID: "85d3f09f-876d-45c0-8631-5df235d5429e"). InnerVolumeSpecName "kube-api-access-ks98q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:07 crc kubenswrapper[4923]: I0321 04:38:07.749931 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks98q\" (UniqueName: \"kubernetes.io/projected/85d3f09f-876d-45c0-8631-5df235d5429e-kube-api-access-ks98q\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.114986 4923 generic.go:334] "Generic (PLEG): container finished" podID="85d3f09f-876d-45c0-8631-5df235d5429e" containerID="7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb" exitCode=0 Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.115032 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-hkph2" event={"ID":"85d3f09f-876d-45c0-8631-5df235d5429e","Type":"ContainerDied","Data":"7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb"} Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.115079 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-hkph2" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.115108 4923 scope.go:117] "RemoveContainer" containerID="7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.115087 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-hkph2" event={"ID":"85d3f09f-876d-45c0-8631-5df235d5429e","Type":"ContainerDied","Data":"7bc31143cabf1b90f2f7f8b92d379751f874f16f6a3f65031de978006b84cadd"} Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.117816 4923 generic.go:334] "Generic (PLEG): container finished" podID="b57d2b83-885a-44b0-b334-f0dd96568ba9" containerID="956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69" exitCode=0 Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.117854 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" event={"ID":"b57d2b83-885a-44b0-b334-f0dd96568ba9","Type":"ContainerDied","Data":"956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69"} Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.117879 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" event={"ID":"b57d2b83-885a-44b0-b334-f0dd96568ba9","Type":"ContainerDied","Data":"6a933926bd5d58e2a22f59ece4bf5b15017a647724abed52a98acc47df1e2efd"} Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.117976 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.143449 4923 scope.go:117] "RemoveContainer" containerID="7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb" Mar 21 04:38:08 crc kubenswrapper[4923]: E0321 04:38:08.143894 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb\": container with ID starting with 7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb not found: ID does not exist" containerID="7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.144022 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb"} err="failed to get container status \"7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb\": rpc error: code = NotFound desc = could not find container \"7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb\": container with ID starting with 7de8380639db26db34d96d42468580dabbeaa75d84c802317a1f605133b0badb not found: ID does not exist" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.144128 4923 scope.go:117] "RemoveContainer" containerID="956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.158057 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-hkph2"] Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.168661 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-hkph2"] Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.173416 4923 scope.go:117] "RemoveContainer" 
containerID="956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69" Mar 21 04:38:08 crc kubenswrapper[4923]: E0321 04:38:08.173780 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69\": container with ID starting with 956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69 not found: ID does not exist" containerID="956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.173816 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69"} err="failed to get container status \"956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69\": rpc error: code = NotFound desc = could not find container \"956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69\": container with ID starting with 956f6af360184b38954fc6aa9485dace8cb7afef54a507668d4de1646b6b7a69 not found: ID does not exist" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.180466 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9"] Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.183057 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f474f7cc-h8gg9"] Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.384376 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d3f09f-876d-45c0-8631-5df235d5429e" path="/var/lib/kubelet/pods/85d3f09f-876d-45c0-8631-5df235d5429e/volumes" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.385253 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57d2b83-885a-44b0-b334-f0dd96568ba9" 
path="/var/lib/kubelet/pods/b57d2b83-885a-44b0-b334-f0dd96568ba9/volumes" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.386466 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e365be8a-9b66-42a3-87e4-8d6dcfced627" path="/var/lib/kubelet/pods/e365be8a-9b66-42a3-87e4-8d6dcfced627/volumes" Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.863357 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg"] Mar 21 04:38:08 crc kubenswrapper[4923]: I0321 04:38:08.863601 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" podUID="892d8b77-92f8-489f-854d-fcbb4ce80dae" containerName="manager" containerID="cri-o://802a4ef752e5e78dc3f5fe8af54cb84e0e873b22089bd982f50b7fcce30b1bf5" gracePeriod=10 Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.073297 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-nxh74"] Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.073523 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-nxh74" podUID="56654110-f2bb-459d-abca-fc6b10f769b4" containerName="registry-server" containerID="cri-o://78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0" gracePeriod=30 Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.101784 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6"] Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.106428 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/7e3b51b6a8afd7c782c20eee88f3ff29a8534df38096cf089dfa437de6zbzf6"] Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.132058 4923 generic.go:334] "Generic (PLEG): container finished" 
podID="892d8b77-92f8-489f-854d-fcbb4ce80dae" containerID="802a4ef752e5e78dc3f5fe8af54cb84e0e873b22089bd982f50b7fcce30b1bf5" exitCode=0 Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.132096 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" event={"ID":"892d8b77-92f8-489f-854d-fcbb4ce80dae","Type":"ContainerDied","Data":"802a4ef752e5e78dc3f5fe8af54cb84e0e873b22089bd982f50b7fcce30b1bf5"} Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.225999 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.368253 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-webhook-cert\") pod \"892d8b77-92f8-489f-854d-fcbb4ce80dae\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.368291 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-apiservice-cert\") pod \"892d8b77-92f8-489f-854d-fcbb4ce80dae\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.368407 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wgmb\" (UniqueName: \"kubernetes.io/projected/892d8b77-92f8-489f-854d-fcbb4ce80dae-kube-api-access-4wgmb\") pod \"892d8b77-92f8-489f-854d-fcbb4ce80dae\" (UID: \"892d8b77-92f8-489f-854d-fcbb4ce80dae\") " Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.371774 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892d8b77-92f8-489f-854d-fcbb4ce80dae-kube-api-access-4wgmb" 
(OuterVolumeSpecName: "kube-api-access-4wgmb") pod "892d8b77-92f8-489f-854d-fcbb4ce80dae" (UID: "892d8b77-92f8-489f-854d-fcbb4ce80dae"). InnerVolumeSpecName "kube-api-access-4wgmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.382240 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "892d8b77-92f8-489f-854d-fcbb4ce80dae" (UID: "892d8b77-92f8-489f-854d-fcbb4ce80dae"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.382236 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "892d8b77-92f8-489f-854d-fcbb4ce80dae" (UID: "892d8b77-92f8-489f-854d-fcbb4ce80dae"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.469688 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wgmb\" (UniqueName: \"kubernetes.io/projected/892d8b77-92f8-489f-854d-fcbb4ce80dae-kube-api-access-4wgmb\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.469725 4923 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:09 crc kubenswrapper[4923]: I0321 04:38:09.469739 4923 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/892d8b77-92f8-489f-854d-fcbb4ce80dae-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.017030 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.143676 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.143699 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg" event={"ID":"892d8b77-92f8-489f-854d-fcbb4ce80dae","Type":"ContainerDied","Data":"03695cb67f695896af69bf694f768dd787c49f2ff46af656f089bbf6e86b01fd"} Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.144571 4923 scope.go:117] "RemoveContainer" containerID="802a4ef752e5e78dc3f5fe8af54cb84e0e873b22089bd982f50b7fcce30b1bf5" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.146508 4923 generic.go:334] "Generic (PLEG): container finished" podID="56654110-f2bb-459d-abca-fc6b10f769b4" containerID="78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0" exitCode=0 Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.146541 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-nxh74" event={"ID":"56654110-f2bb-459d-abca-fc6b10f769b4","Type":"ContainerDied","Data":"78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0"} Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.146553 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-nxh74" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.146562 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-nxh74" event={"ID":"56654110-f2bb-459d-abca-fc6b10f769b4","Type":"ContainerDied","Data":"2b7ea746f1c71e2288d2cbebf4f0ff8f61b7cf41ffb98bd28b0119d06f2d35ea"} Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.172859 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg"] Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.177704 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dr7w\" (UniqueName: \"kubernetes.io/projected/56654110-f2bb-459d-abca-fc6b10f769b4-kube-api-access-7dr7w\") pod \"56654110-f2bb-459d-abca-fc6b10f769b4\" (UID: \"56654110-f2bb-459d-abca-fc6b10f769b4\") " Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.179622 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6868c4d546-xvgzg"] Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.181100 4923 scope.go:117] "RemoveContainer" containerID="78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.183184 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56654110-f2bb-459d-abca-fc6b10f769b4-kube-api-access-7dr7w" (OuterVolumeSpecName: "kube-api-access-7dr7w") pod "56654110-f2bb-459d-abca-fc6b10f769b4" (UID: "56654110-f2bb-459d-abca-fc6b10f769b4"). InnerVolumeSpecName "kube-api-access-7dr7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.222773 4923 scope.go:117] "RemoveContainer" containerID="78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0" Mar 21 04:38:10 crc kubenswrapper[4923]: E0321 04:38:10.223863 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0\": container with ID starting with 78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0 not found: ID does not exist" containerID="78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.223923 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0"} err="failed to get container status \"78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0\": rpc error: code = NotFound desc = could not find container \"78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0\": container with ID starting with 78d1c5e71dda6cef5f1b266eb8f7638cc8b08514533c426ac7a7a8cf52235de0 not found: ID does not exist" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.279001 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dr7w\" (UniqueName: \"kubernetes.io/projected/56654110-f2bb-459d-abca-fc6b10f769b4-kube-api-access-7dr7w\") on node \"crc\" DevicePath \"\"" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.366261 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210b685d-8b03-4ccd-9b23-c0231434354c" path="/var/lib/kubelet/pods/210b685d-8b03-4ccd-9b23-c0231434354c/volumes" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.367216 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="892d8b77-92f8-489f-854d-fcbb4ce80dae" path="/var/lib/kubelet/pods/892d8b77-92f8-489f-854d-fcbb4ce80dae/volumes" Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.468768 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-nxh74"] Mar 21 04:38:10 crc kubenswrapper[4923]: I0321 04:38:10.477298 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-nxh74"] Mar 21 04:38:12 crc kubenswrapper[4923]: I0321 04:38:12.370592 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56654110-f2bb-459d-abca-fc6b10f769b4" path="/var/lib/kubelet/pods/56654110-f2bb-459d-abca-fc6b10f769b4/volumes" Mar 21 04:38:17 crc kubenswrapper[4923]: I0321 04:38:17.662254 4923 scope.go:117] "RemoveContainer" containerID="c43d7984a1b80a0004c4cbb4cb90c1b6acd2e3edb49fc586c1c04157c4374f15" Mar 21 04:38:17 crc kubenswrapper[4923]: I0321 04:38:17.732170 4923 scope.go:117] "RemoveContainer" containerID="de37a24899fd452700acf6258fe8ed6ec020bc4e6869eae2fbe6d287be98bb3f" Mar 21 04:38:17 crc kubenswrapper[4923]: I0321 04:38:17.766436 4923 scope.go:117] "RemoveContainer" containerID="979b0a1c5b8cfce0c72d45862299ab9988c8912b219cc65e64713d60a716036e" Mar 21 04:38:17 crc kubenswrapper[4923]: I0321 04:38:17.804381 4923 scope.go:117] "RemoveContainer" containerID="6884b983e5c49af9978a5e9defc499f455a479cb072f1d8185d0ffbd3564f9d9" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.132410 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rqglt/must-gather-hmr8p"] Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133065 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56c4706-b4d1-4fe7-bf97-7328684b55e0" containerName="operator" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133084 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56c4706-b4d1-4fe7-bf97-7328684b55e0" containerName="operator" Mar 21 
04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133102 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerName="setup-container" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133113 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerName="setup-container" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133134 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3134e8ba-8705-4780-aa7f-5a644e3949ee" containerName="oc" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133146 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3134e8ba-8705-4780-aa7f-5a644e3949ee" containerName="oc" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133165 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerName="galera" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133176 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerName="galera" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133193 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862bd87a-5130-41e2-a883-ac803db8df3b" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133206 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="862bd87a-5130-41e2-a883-ac803db8df3b" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133224 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerName="rabbitmq" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133235 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerName="rabbitmq" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133247 4923 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerName="mariadb-account-delete" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133258 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerName="mariadb-account-delete" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133270 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerName="mariadb-account-delete" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133281 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerName="mariadb-account-delete" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133294 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ef91ae-d440-42d3-b259-8eacbb269ee0" containerName="keystone-api" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133305 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ef91ae-d440-42d3-b259-8eacbb269ee0" containerName="keystone-api" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133341 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerName="mysql-bootstrap" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133353 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerName="mysql-bootstrap" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133371 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerName="galera" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133382 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerName="galera" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133398 4923 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerName="mysql-bootstrap" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133409 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerName="mysql-bootstrap" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133424 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d00725-90e1-4ad2-93b2-b5e71145cd2c" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133435 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d00725-90e1-4ad2-93b2-b5e71145cd2c" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133455 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be365be-2d90-4b60-88e4-db66f0d6192f" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133466 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be365be-2d90-4b60-88e4-db66f0d6192f" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133482 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9faba93f-83d8-45ad-bea2-44e730e7f3a4" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133492 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="9faba93f-83d8-45ad-bea2-44e730e7f3a4" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133505 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d3f09f-876d-45c0-8631-5df235d5429e" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133516 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d3f09f-876d-45c0-8631-5df235d5429e" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133532 4923 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="892d8b77-92f8-489f-854d-fcbb4ce80dae" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133543 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="892d8b77-92f8-489f-854d-fcbb4ce80dae" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133559 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa331624-aac7-4683-abe0-7e0f37b4b121" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133569 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa331624-aac7-4683-abe0-7e0f37b4b121" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133581 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56654110-f2bb-459d-abca-fc6b10f769b4" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133592 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="56654110-f2bb-459d-abca-fc6b10f769b4" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133611 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f2119f-aa10-497e-bc3b-547681df5eb4" containerName="memcached" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133624 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f2119f-aa10-497e-bc3b-547681df5eb4" containerName="memcached" Mar 21 04:38:23 crc kubenswrapper[4923]: E0321 04:38:23.133643 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57d2b83-885a-44b0-b334-f0dd96568ba9" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133654 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57d2b83-885a-44b0-b334-f0dd96568ba9" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133863 4923 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0ba6d1e7-e4e3-4857-a4b3-5ed2265343c0" containerName="galera" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133881 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerName="mariadb-account-delete" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133897 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d00725-90e1-4ad2-93b2-b5e71145cd2c" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133910 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d61db4-20c9-4ea6-a8d7-435f200abfc4" containerName="mariadb-account-delete" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133927 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56c4706-b4d1-4fe7-bf97-7328684b55e0" containerName="operator" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133945 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="862bd87a-5130-41e2-a883-ac803db8df3b" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133959 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="9701db56-65b1-4cee-8942-69fc9cc4e7b8" containerName="rabbitmq" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133976 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="3134e8ba-8705-4780-aa7f-5a644e3949ee" containerName="oc" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.133990 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="892d8b77-92f8-489f-854d-fcbb4ce80dae" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134007 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa331624-aac7-4683-abe0-7e0f37b4b121" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134021 4923 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7f7fa4a2-4b42-4f5d-85d3-3d40e54e6b0d" containerName="galera" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134035 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f2119f-aa10-497e-bc3b-547681df5eb4" containerName="memcached" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134049 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="56654110-f2bb-459d-abca-fc6b10f769b4" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134067 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57d2b83-885a-44b0-b334-f0dd96568ba9" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134083 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="9faba93f-83d8-45ad-bea2-44e730e7f3a4" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134095 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ef91ae-d440-42d3-b259-8eacbb269ee0" containerName="keystone-api" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134107 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be365be-2d90-4b60-88e4-db66f0d6192f" containerName="manager" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.134121 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d3f09f-876d-45c0-8631-5df235d5429e" containerName="registry-server" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.135071 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.144597 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rqglt"/"openshift-service-ca.crt" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.145589 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rqglt"/"kube-root-ca.crt" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.160352 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rqglt/must-gather-hmr8p"] Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.187887 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgj6\" (UniqueName: \"kubernetes.io/projected/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-kube-api-access-hfgj6\") pod \"must-gather-hmr8p\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.187956 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-must-gather-output\") pod \"must-gather-hmr8p\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.289747 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfgj6\" (UniqueName: \"kubernetes.io/projected/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-kube-api-access-hfgj6\") pod \"must-gather-hmr8p\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.289781 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-must-gather-output\") pod \"must-gather-hmr8p\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.290235 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-must-gather-output\") pod \"must-gather-hmr8p\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.322179 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfgj6\" (UniqueName: \"kubernetes.io/projected/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-kube-api-access-hfgj6\") pod \"must-gather-hmr8p\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.457433 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqglt/must-gather-hmr8p"
Mar 21 04:38:23 crc kubenswrapper[4923]: I0321 04:38:23.902047 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rqglt/must-gather-hmr8p"]
Mar 21 04:38:24 crc kubenswrapper[4923]: I0321 04:38:24.283145 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqglt/must-gather-hmr8p" event={"ID":"e1fa0eab-c10b-4c9d-afae-9a19cad9c996","Type":"ContainerStarted","Data":"44345ae3e7054300512432555cb699b62c45b587e615554dc822752cae545515"}
Mar 21 04:38:28 crc kubenswrapper[4923]: I0321 04:38:28.317228 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqglt/must-gather-hmr8p" event={"ID":"e1fa0eab-c10b-4c9d-afae-9a19cad9c996","Type":"ContainerStarted","Data":"510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2"}
Mar 21 04:38:28 crc kubenswrapper[4923]: I0321 04:38:28.317613 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqglt/must-gather-hmr8p" event={"ID":"e1fa0eab-c10b-4c9d-afae-9a19cad9c996","Type":"ContainerStarted","Data":"d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91"}
Mar 21 04:38:28 crc kubenswrapper[4923]: I0321 04:38:28.338714 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rqglt/must-gather-hmr8p" podStartSLOduration=1.592673652 podStartE2EDuration="5.338686322s" podCreationTimestamp="2026-03-21 04:38:23 +0000 UTC" firstStartedPulling="2026-03-21 04:38:23.911663469 +0000 UTC m=+1269.064674566" lastFinishedPulling="2026-03-21 04:38:27.657676159 +0000 UTC m=+1272.810687236" observedRunningTime="2026-03-21 04:38:28.33507578 +0000 UTC m=+1273.488086917" watchObservedRunningTime="2026-03-21 04:38:28.338686322 +0000 UTC m=+1273.491697449"
Mar 21 04:39:03 crc kubenswrapper[4923]: I0321 04:39:03.235471 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:39:03 crc kubenswrapper[4923]: I0321 04:39:03.236418 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:39:18 crc kubenswrapper[4923]: I0321 04:39:18.149176 4923 scope.go:117] "RemoveContainer" containerID="8a23cca9f8e8fb84e508324780dc96a0d68bb7138af81924407226d9ed8282d0"
Mar 21 04:39:18 crc kubenswrapper[4923]: I0321 04:39:18.176958 4923 scope.go:117] "RemoveContainer" containerID="d7311cd62fe728871b73d36a083236fa6f7b91b4f53671bce1293a4c0472f64a"
Mar 21 04:39:18 crc kubenswrapper[4923]: I0321 04:39:18.204110 4923 scope.go:117] "RemoveContainer" containerID="98593327340a2458fd4716274cc3c47e84a99ec10498a7df35fe3aeba758f51a"
Mar 21 04:39:18 crc kubenswrapper[4923]: I0321 04:39:18.234020 4923 scope.go:117] "RemoveContainer" containerID="cd3ab49393764b1a2f849f5ffff1dec3f075c6433e53262099d09271e67f3edd"
Mar 21 04:39:20 crc kubenswrapper[4923]: I0321 04:39:20.833515 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rwpdp_b5571552-4369-46f6-ad29-a54b1f4a7a8f/control-plane-machine-set-operator/0.log"
Mar 21 04:39:20 crc kubenswrapper[4923]: I0321 04:39:20.975103 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-22lp8_baaa32c9-702b-4a43-a7b7-7a98272f80f3/kube-rbac-proxy/0.log"
Mar 21 04:39:21 crc kubenswrapper[4923]: I0321 04:39:21.013695 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-22lp8_baaa32c9-702b-4a43-a7b7-7a98272f80f3/machine-api-operator/0.log"
Mar 21 04:39:33 crc kubenswrapper[4923]: I0321 04:39:33.235965 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:39:33 crc kubenswrapper[4923]: I0321 04:39:33.238534 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:39:51 crc kubenswrapper[4923]: I0321 04:39:51.835197 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-748tf_cc875221-d66d-43a1-83ab-42059357491d/kube-rbac-proxy/0.log"
Mar 21 04:39:51 crc kubenswrapper[4923]: I0321 04:39:51.927490 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-748tf_cc875221-d66d-43a1-83ab-42059357491d/controller/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.326761 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.774283 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.795380 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.816742 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.817257 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.973810 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.987083 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log"
Mar 21 04:39:52 crc kubenswrapper[4923]: I0321 04:39:52.996189 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.011071 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.160198 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.172865 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/controller/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.178796 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.189350 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.341998 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/frr-metrics/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.352765 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/kube-rbac-proxy/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.396546 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/kube-rbac-proxy-frr/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.539001 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/reloader/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.543983 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-6t644_a9dca852-085f-4e4a-9ded-ffb15aada6cb/frr-k8s-webhook-server/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.670175 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/frr/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.727347 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fd8f45f-g4r57_99dd2fb0-56d7-40c7-836f-2f004f9dc676/manager/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.886237 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b974f9ffb-m2lc4_ac4f10a7-8f47-40e1-9ca2-6f401c588c64/webhook-server/0.log"
Mar 21 04:39:53 crc kubenswrapper[4923]: I0321 04:39:53.936490 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rw9p8_8f2886c8-0371-44b8-b2bc-59dfd3a193f6/kube-rbac-proxy/0.log"
Mar 21 04:39:54 crc kubenswrapper[4923]: I0321 04:39:54.067651 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rw9p8_8f2886c8-0371-44b8-b2bc-59dfd3a193f6/speaker/0.log"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.128543 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567800-tgmnk"]
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.129540 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567800-tgmnk"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.131479 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.131541 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.131578 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.141951 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567800-tgmnk"]
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.253028 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbmqs\" (UniqueName: \"kubernetes.io/projected/a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559-kube-api-access-bbmqs\") pod \"auto-csr-approver-29567800-tgmnk\" (UID: \"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559\") " pod="openshift-infra/auto-csr-approver-29567800-tgmnk"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.354927 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbmqs\" (UniqueName: \"kubernetes.io/projected/a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559-kube-api-access-bbmqs\") pod \"auto-csr-approver-29567800-tgmnk\" (UID: \"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559\") " pod="openshift-infra/auto-csr-approver-29567800-tgmnk"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.373626 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbmqs\" (UniqueName: \"kubernetes.io/projected/a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559-kube-api-access-bbmqs\") pod \"auto-csr-approver-29567800-tgmnk\" (UID: \"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559\") " pod="openshift-infra/auto-csr-approver-29567800-tgmnk"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.483052 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567800-tgmnk"
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.925905 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567800-tgmnk"]
Mar 21 04:40:00 crc kubenswrapper[4923]: I0321 04:40:00.955250 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567800-tgmnk" event={"ID":"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559","Type":"ContainerStarted","Data":"a581bc664bda9c259a0a4224ae298aa57b84a65e8ee6e1f3c3b37ab45eea55eb"}
Mar 21 04:40:02 crc kubenswrapper[4923]: I0321 04:40:02.975419 4923 generic.go:334] "Generic (PLEG): container finished" podID="a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559" containerID="da81ca060dfc40a6f45f2390b567dc6f26309950f9cc7440daf0e07dd0a1d45a" exitCode=0
Mar 21 04:40:02 crc kubenswrapper[4923]: I0321 04:40:02.975544 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567800-tgmnk" event={"ID":"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559","Type":"ContainerDied","Data":"da81ca060dfc40a6f45f2390b567dc6f26309950f9cc7440daf0e07dd0a1d45a"}
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.236383 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.236456 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.236508 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr"
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.237098 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe257518d39f35ad4d416456034fb8a1f94f5ac481d8616c2afbf6e356e0b8a6"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.237164 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://fe257518d39f35ad4d416456034fb8a1f94f5ac481d8616c2afbf6e356e0b8a6" gracePeriod=600
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.984798 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="fe257518d39f35ad4d416456034fb8a1f94f5ac481d8616c2afbf6e356e0b8a6" exitCode=0
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.984844 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"fe257518d39f35ad4d416456034fb8a1f94f5ac481d8616c2afbf6e356e0b8a6"}
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.985241 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"}
Mar 21 04:40:03 crc kubenswrapper[4923]: I0321 04:40:03.985262 4923 scope.go:117] "RemoveContainer" containerID="ed57872de3a4ec46ca622220eae1ec6afcee8d851a5ca3d72e36b3f3c20665be"
Mar 21 04:40:04 crc kubenswrapper[4923]: I0321 04:40:04.239240 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567800-tgmnk"
Mar 21 04:40:04 crc kubenswrapper[4923]: I0321 04:40:04.405006 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbmqs\" (UniqueName: \"kubernetes.io/projected/a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559-kube-api-access-bbmqs\") pod \"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559\" (UID: \"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559\") "
Mar 21 04:40:04 crc kubenswrapper[4923]: I0321 04:40:04.415466 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559-kube-api-access-bbmqs" (OuterVolumeSpecName: "kube-api-access-bbmqs") pod "a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559" (UID: "a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559"). InnerVolumeSpecName "kube-api-access-bbmqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:40:04 crc kubenswrapper[4923]: I0321 04:40:04.506278 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbmqs\" (UniqueName: \"kubernetes.io/projected/a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559-kube-api-access-bbmqs\") on node \"crc\" DevicePath \"\""
Mar 21 04:40:05 crc kubenswrapper[4923]: I0321 04:40:05.002528 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567800-tgmnk" event={"ID":"a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559","Type":"ContainerDied","Data":"a581bc664bda9c259a0a4224ae298aa57b84a65e8ee6e1f3c3b37ab45eea55eb"}
Mar 21 04:40:05 crc kubenswrapper[4923]: I0321 04:40:05.002864 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a581bc664bda9c259a0a4224ae298aa57b84a65e8ee6e1f3c3b37ab45eea55eb"
Mar 21 04:40:05 crc kubenswrapper[4923]: I0321 04:40:05.002620 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567800-tgmnk"
Mar 21 04:40:05 crc kubenswrapper[4923]: I0321 04:40:05.341372 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567794-8gk59"]
Mar 21 04:40:05 crc kubenswrapper[4923]: I0321 04:40:05.345119 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567794-8gk59"]
Mar 21 04:40:06 crc kubenswrapper[4923]: I0321 04:40:06.365748 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1023ab82-a41a-4fa0-a0fb-f19cf7ed3716" path="/var/lib/kubelet/pods/1023ab82-a41a-4fa0-a0fb-f19cf7ed3716/volumes"
Mar 21 04:40:18 crc kubenswrapper[4923]: I0321 04:40:18.291680 4923 scope.go:117] "RemoveContainer" containerID="54f6207f84f560223dfe6d12a885f3ad3b474c0993c4dd3808948e8450d3d518"
Mar 21 04:40:18 crc kubenswrapper[4923]: I0321 04:40:18.346386 4923 scope.go:117] "RemoveContainer" containerID="04c56c12c42893bbbd5f42cf72b1d50e401e494d67c3ae3c8fa72418cfab8d0d"
Mar 21 04:40:18 crc kubenswrapper[4923]: I0321 04:40:18.376272 4923 scope.go:117] "RemoveContainer" containerID="913c86b642b50c0db8f2db51fcdbaabe836b66b378f8b08c12323cdbde872502"
Mar 21 04:40:18 crc kubenswrapper[4923]: I0321 04:40:18.424019 4923 scope.go:117] "RemoveContainer" containerID="3ab92e81bb94d39375d057ca2450621f5934bc015470ebbc8fb8c77165bdcec3"
Mar 21 04:40:21 crc kubenswrapper[4923]: I0321 04:40:21.418871 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/util/0.log"
Mar 21 04:40:21 crc kubenswrapper[4923]: I0321 04:40:21.661649 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/util/0.log"
Mar 21 04:40:21 crc kubenswrapper[4923]: I0321 04:40:21.668068 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/pull/0.log"
Mar 21 04:40:21 crc kubenswrapper[4923]: I0321 04:40:21.721928 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/pull/0.log"
Mar 21 04:40:21 crc kubenswrapper[4923]: I0321 04:40:21.874267 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/util/0.log"
Mar 21 04:40:21 crc kubenswrapper[4923]: I0321 04:40:21.878790 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/extract/0.log"
Mar 21 04:40:21 crc kubenswrapper[4923]: I0321 04:40:21.894863 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/pull/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.043746 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-utilities/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.213599 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-content/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.224821 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-utilities/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.245369 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-content/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.410616 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-content/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.421909 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-utilities/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.618836 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-utilities/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.687012 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/registry-server/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.764345 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-utilities/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.764438 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-content/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.799658 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-content/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.978465 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-utilities/0.log"
Mar 21 04:40:22 crc kubenswrapper[4923]: I0321 04:40:22.990809 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-content/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.154735 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nj2n7_bba19ab1-fbf2-4a6f-a481-45e06896f9cd/marketplace-operator/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.296524 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-utilities/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.389843 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/registry-server/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.391448 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-utilities/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.418925 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-content/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.506115 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-content/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.634986 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-content/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.635151 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-utilities/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.760083 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/registry-server/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.788968 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-utilities/0.log"
Mar 21 04:40:23 crc kubenswrapper[4923]: I0321 04:40:23.987840 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-utilities/0.log"
Mar 21 04:40:24 crc kubenswrapper[4923]: I0321 04:40:24.011525 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-content/0.log"
Mar 21 04:40:24 crc kubenswrapper[4923]: I0321 04:40:24.034373 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-content/0.log"
Mar 21 04:40:24 crc kubenswrapper[4923]: I0321 04:40:24.182626 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-content/0.log"
Mar 21 04:40:24 crc kubenswrapper[4923]: I0321 04:40:24.257892 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-utilities/0.log"
Mar 21 04:40:24 crc kubenswrapper[4923]: I0321 04:40:24.488445 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/registry-server/0.log"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.488788 4923 scope.go:117] "RemoveContainer" containerID="cd42bb74b94bd264bda475db0dfae54e95d48c2a3f421d2d45226c3b5c500e2d"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.537538 4923 scope.go:117] "RemoveContainer" containerID="0172b546f7be9c52e576ba33184e91ecff00885593896cc67d4627d03c9cf1a9"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.568790 4923 scope.go:117] "RemoveContainer" containerID="c4bf1cf47293984749e6e23a25a8cb14f47dc6e80ff86976ff74803995b68599"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.598409 4923 scope.go:117] "RemoveContainer" containerID="6cdb7fd1cbb173c99bd888a9db792f4f2cb6a03e23fa0907b8d62c37f2f939c6"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.633487 4923 scope.go:117] "RemoveContainer" containerID="291175e0c172462298c9695aac42064dfab9d8c34c6ae1dec9a3a4019fbfb480"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.659950 4923 scope.go:117] "RemoveContainer" containerID="b78f5de8b80224be97834c33098373ca3ac57f11a77b993ca92b69d9b81ac40c"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.679935 4923 scope.go:117] "RemoveContainer" containerID="7a7aa68afe6564412913a678196110a98c8732ad1c042075412e7a983181c145"
Mar 21 04:41:18 crc kubenswrapper[4923]: I0321 04:41:18.695252 4923 scope.go:117] "RemoveContainer" containerID="0fe513b559ae0237ab405a0142663169cdf85eaf9192b5b44141e5d58391d49f"
Mar 21 04:41:23 crc kubenswrapper[4923]: I0321 04:41:23.982790 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8pm62"]
Mar 21 04:41:23 crc kubenswrapper[4923]: E0321 04:41:23.983560 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559" containerName="oc"
Mar 21 04:41:23 crc kubenswrapper[4923]: I0321 04:41:23.983582 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559" containerName="oc"
Mar 21 04:41:23 crc kubenswrapper[4923]: I0321 04:41:23.983767 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559" containerName="oc"
Mar 21 04:41:23 crc kubenswrapper[4923]: I0321 04:41:23.985129 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.002817 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pm62"]
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.149611 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75cn9\" (UniqueName: \"kubernetes.io/projected/194aa099-ed46-4df0-8be9-01b7e35db642-kube-api-access-75cn9\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.149772 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-catalog-content\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.149820 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-utilities\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.250558 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-catalog-content\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.250625 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-utilities\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.250667 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75cn9\" (UniqueName: \"kubernetes.io/projected/194aa099-ed46-4df0-8be9-01b7e35db642-kube-api-access-75cn9\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.251240 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-catalog-content\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.251433 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-utilities\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.282906 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75cn9\" (UniqueName: \"kubernetes.io/projected/194aa099-ed46-4df0-8be9-01b7e35db642-kube-api-access-75cn9\") pod \"redhat-operators-8pm62\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.310571 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.545532 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8pm62"]
Mar 21 04:41:24 crc kubenswrapper[4923]: I0321 04:41:24.593129 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pm62" event={"ID":"194aa099-ed46-4df0-8be9-01b7e35db642","Type":"ContainerStarted","Data":"37f940987b44aa6c4d343467deda4d06ea4bb53bae1f2a5d18224e10a92e0045"}
Mar 21 04:41:25 crc kubenswrapper[4923]: I0321 04:41:25.602069 4923 generic.go:334] "Generic (PLEG): container finished" podID="194aa099-ed46-4df0-8be9-01b7e35db642" containerID="0f1a234a4a9199894eb07f7e9d51c2a4316934a437ba40c07180e155c8b4467f" exitCode=0
Mar 21 04:41:25 crc kubenswrapper[4923]: I0321 04:41:25.602122 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pm62" event={"ID":"194aa099-ed46-4df0-8be9-01b7e35db642","Type":"ContainerDied","Data":"0f1a234a4a9199894eb07f7e9d51c2a4316934a437ba40c07180e155c8b4467f"}
Mar 21 04:41:26 crc kubenswrapper[4923]: I0321 04:41:26.613801 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pm62" event={"ID":"194aa099-ed46-4df0-8be9-01b7e35db642","Type":"ContainerStarted","Data":"e18e85b7fdcde8b4c20851d28c8e005565e57cebbd2e5188b895c2a0e1e27d2b"}
Mar 21 04:41:27 crc kubenswrapper[4923]: I0321 04:41:27.622026 4923 generic.go:334] "Generic (PLEG): container finished" podID="194aa099-ed46-4df0-8be9-01b7e35db642" containerID="e18e85b7fdcde8b4c20851d28c8e005565e57cebbd2e5188b895c2a0e1e27d2b" exitCode=0
Mar 21 04:41:27 crc kubenswrapper[4923]: I0321 04:41:27.622061 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pm62" event={"ID":"194aa099-ed46-4df0-8be9-01b7e35db642","Type":"ContainerDied","Data":"e18e85b7fdcde8b4c20851d28c8e005565e57cebbd2e5188b895c2a0e1e27d2b"}
Mar 21 04:41:28 crc kubenswrapper[4923]: I0321 04:41:28.642366 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pm62" event={"ID":"194aa099-ed46-4df0-8be9-01b7e35db642","Type":"ContainerStarted","Data":"d46966c10340906b6c42d9305eb4435fe0516dc43b411690364234b8884200e3"}
Mar 21 04:41:28 crc kubenswrapper[4923]: I0321 04:41:28.681025 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8pm62" podStartSLOduration=3.23172551 podStartE2EDuration="5.680998133s" podCreationTimestamp="2026-03-21 04:41:23 +0000 UTC" firstStartedPulling="2026-03-21 04:41:25.603873711 +0000 UTC m=+1450.756884808" lastFinishedPulling="2026-03-21 04:41:28.053146294 +0000 UTC m=+1453.206157431" observedRunningTime="2026-03-21 04:41:28.674434575 +0000 UTC m=+1453.827445672" watchObservedRunningTime="2026-03-21 04:41:28.680998133 +0000 UTC m=+1453.834009260"
Mar 21 04:41:34 crc kubenswrapper[4923]: I0321 04:41:34.311005 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:34 crc kubenswrapper[4923]: I0321 04:41:34.311491 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:35 crc kubenswrapper[4923]: I0321 04:41:35.391692 4923 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8pm62" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="registry-server" probeResult="failure" output=<
Mar 21 04:41:35 crc kubenswrapper[4923]: timeout: failed to connect service ":50051" within 1s
Mar 21 04:41:35 crc kubenswrapper[4923]: >
Mar 21 04:41:37 crc kubenswrapper[4923]: I0321 04:41:37.708273 4923 generic.go:334] "Generic (PLEG): container finished" podID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerID="d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91" exitCode=0
Mar 21 04:41:37 crc kubenswrapper[4923]: I0321 04:41:37.708364 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rqglt/must-gather-hmr8p" event={"ID":"e1fa0eab-c10b-4c9d-afae-9a19cad9c996","Type":"ContainerDied","Data":"d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91"}
Mar 21 04:41:37 crc kubenswrapper[4923]: I0321 04:41:37.709488 4923 scope.go:117] "RemoveContainer" containerID="d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91"
Mar 21 04:41:38 crc kubenswrapper[4923]: I0321 04:41:38.373182 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rqglt_must-gather-hmr8p_e1fa0eab-c10b-4c9d-afae-9a19cad9c996/gather/0.log"
Mar 21 04:41:44 crc kubenswrapper[4923]: I0321 04:41:44.377478 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:44 crc kubenswrapper[4923]: I0321 04:41:44.428851 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8pm62"
Mar 21 04:41:44 crc kubenswrapper[4923]: I0321 04:41:44.622793 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pm62"]
Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.213865 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rqglt/must-gather-hmr8p"]
Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.214194 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rqglt/must-gather-hmr8p" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerName="copy" containerID="cri-o://510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2" gracePeriod=2
Mar 21 04:41:45 crc kubenswrapper[4923]: I0321
04:41:45.218067 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rqglt/must-gather-hmr8p"] Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.719872 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rqglt_must-gather-hmr8p_e1fa0eab-c10b-4c9d-afae-9a19cad9c996/copy/0.log" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.720964 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.770262 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rqglt_must-gather-hmr8p_e1fa0eab-c10b-4c9d-afae-9a19cad9c996/copy/0.log" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.770763 4923 generic.go:334] "Generic (PLEG): container finished" podID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerID="510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2" exitCode=143 Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.770840 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rqglt/must-gather-hmr8p" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.770853 4923 scope.go:117] "RemoveContainer" containerID="510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.770949 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8pm62" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="registry-server" containerID="cri-o://d46966c10340906b6c42d9305eb4435fe0516dc43b411690364234b8884200e3" gracePeriod=2 Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.789294 4923 scope.go:117] "RemoveContainer" containerID="d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.821239 4923 scope.go:117] "RemoveContainer" containerID="510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2" Mar 21 04:41:45 crc kubenswrapper[4923]: E0321 04:41:45.821630 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2\": container with ID starting with 510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2 not found: ID does not exist" containerID="510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.821662 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2"} err="failed to get container status \"510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2\": rpc error: code = NotFound desc = could not find container \"510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2\": container with ID starting with 
510165930c1aea2bc34aad4f36fae0c100158caa6bcbcb3b225ecc8933bc00f2 not found: ID does not exist" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.821684 4923 scope.go:117] "RemoveContainer" containerID="d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91" Mar 21 04:41:45 crc kubenswrapper[4923]: E0321 04:41:45.822006 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91\": container with ID starting with d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91 not found: ID does not exist" containerID="d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.822028 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91"} err="failed to get container status \"d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91\": rpc error: code = NotFound desc = could not find container \"d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91\": container with ID starting with d4fe6ebe029653d86714e7f4f856b2f4b6c3dd7d6c73d9d2775ee3985bc92c91 not found: ID does not exist" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.876178 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfgj6\" (UniqueName: \"kubernetes.io/projected/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-kube-api-access-hfgj6\") pod \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.876272 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-must-gather-output\") pod 
\"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\" (UID: \"e1fa0eab-c10b-4c9d-afae-9a19cad9c996\") " Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.883210 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-kube-api-access-hfgj6" (OuterVolumeSpecName: "kube-api-access-hfgj6") pod "e1fa0eab-c10b-4c9d-afae-9a19cad9c996" (UID: "e1fa0eab-c10b-4c9d-afae-9a19cad9c996"). InnerVolumeSpecName "kube-api-access-hfgj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.932757 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e1fa0eab-c10b-4c9d-afae-9a19cad9c996" (UID: "e1fa0eab-c10b-4c9d-afae-9a19cad9c996"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.977840 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfgj6\" (UniqueName: \"kubernetes.io/projected/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-kube-api-access-hfgj6\") on node \"crc\" DevicePath \"\"" Mar 21 04:41:45 crc kubenswrapper[4923]: I0321 04:41:45.977869 4923 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1fa0eab-c10b-4c9d-afae-9a19cad9c996-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 21 04:41:46 crc kubenswrapper[4923]: I0321 04:41:46.367255 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" path="/var/lib/kubelet/pods/e1fa0eab-c10b-4c9d-afae-9a19cad9c996/volumes" Mar 21 04:41:46 crc kubenswrapper[4923]: I0321 04:41:46.783609 4923 generic.go:334] "Generic (PLEG): container finished" podID="194aa099-ed46-4df0-8be9-01b7e35db642" 
containerID="d46966c10340906b6c42d9305eb4435fe0516dc43b411690364234b8884200e3" exitCode=0 Mar 21 04:41:46 crc kubenswrapper[4923]: I0321 04:41:46.783668 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pm62" event={"ID":"194aa099-ed46-4df0-8be9-01b7e35db642","Type":"ContainerDied","Data":"d46966c10340906b6c42d9305eb4435fe0516dc43b411690364234b8884200e3"} Mar 21 04:41:46 crc kubenswrapper[4923]: I0321 04:41:46.872399 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pm62" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.026435 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-utilities\") pod \"194aa099-ed46-4df0-8be9-01b7e35db642\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.026503 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75cn9\" (UniqueName: \"kubernetes.io/projected/194aa099-ed46-4df0-8be9-01b7e35db642-kube-api-access-75cn9\") pod \"194aa099-ed46-4df0-8be9-01b7e35db642\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.026620 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-catalog-content\") pod \"194aa099-ed46-4df0-8be9-01b7e35db642\" (UID: \"194aa099-ed46-4df0-8be9-01b7e35db642\") " Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.027303 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-utilities" (OuterVolumeSpecName: "utilities") pod "194aa099-ed46-4df0-8be9-01b7e35db642" (UID: 
"194aa099-ed46-4df0-8be9-01b7e35db642"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.030465 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194aa099-ed46-4df0-8be9-01b7e35db642-kube-api-access-75cn9" (OuterVolumeSpecName: "kube-api-access-75cn9") pod "194aa099-ed46-4df0-8be9-01b7e35db642" (UID: "194aa099-ed46-4df0-8be9-01b7e35db642"). InnerVolumeSpecName "kube-api-access-75cn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.128565 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.128601 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75cn9\" (UniqueName: \"kubernetes.io/projected/194aa099-ed46-4df0-8be9-01b7e35db642-kube-api-access-75cn9\") on node \"crc\" DevicePath \"\"" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.174513 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "194aa099-ed46-4df0-8be9-01b7e35db642" (UID: "194aa099-ed46-4df0-8be9-01b7e35db642"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.229931 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194aa099-ed46-4df0-8be9-01b7e35db642-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.793933 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8pm62" event={"ID":"194aa099-ed46-4df0-8be9-01b7e35db642","Type":"ContainerDied","Data":"37f940987b44aa6c4d343467deda4d06ea4bb53bae1f2a5d18224e10a92e0045"} Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.793987 4923 scope.go:117] "RemoveContainer" containerID="d46966c10340906b6c42d9305eb4435fe0516dc43b411690364234b8884200e3" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.794041 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8pm62" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.809803 4923 scope.go:117] "RemoveContainer" containerID="e18e85b7fdcde8b4c20851d28c8e005565e57cebbd2e5188b895c2a0e1e27d2b" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.824236 4923 scope.go:117] "RemoveContainer" containerID="0f1a234a4a9199894eb07f7e9d51c2a4316934a437ba40c07180e155c8b4467f" Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.865301 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8pm62"] Mar 21 04:41:47 crc kubenswrapper[4923]: I0321 04:41:47.870506 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8pm62"] Mar 21 04:41:48 crc kubenswrapper[4923]: I0321 04:41:48.371420 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" path="/var/lib/kubelet/pods/194aa099-ed46-4df0-8be9-01b7e35db642/volumes" Mar 21 04:42:00 crc 
kubenswrapper[4923]: I0321 04:42:00.155510 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567802-9j79r"] Mar 21 04:42:00 crc kubenswrapper[4923]: E0321 04:42:00.156654 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="registry-server" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.156686 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="registry-server" Mar 21 04:42:00 crc kubenswrapper[4923]: E0321 04:42:00.156721 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="extract-content" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.156735 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="extract-content" Mar 21 04:42:00 crc kubenswrapper[4923]: E0321 04:42:00.156761 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="extract-utilities" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.156776 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="extract-utilities" Mar 21 04:42:00 crc kubenswrapper[4923]: E0321 04:42:00.156796 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerName="copy" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.156811 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerName="copy" Mar 21 04:42:00 crc kubenswrapper[4923]: E0321 04:42:00.156837 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerName="gather" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.156851 4923 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerName="gather" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.157040 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="194aa099-ed46-4df0-8be9-01b7e35db642" containerName="registry-server" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.157079 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerName="copy" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.157105 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fa0eab-c10b-4c9d-afae-9a19cad9c996" containerName="gather" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.158022 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567802-9j79r" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.162708 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.163640 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.164097 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.168773 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567802-9j79r"] Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.313955 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2p9\" (UniqueName: \"kubernetes.io/projected/f597ce65-244c-4e18-a184-65ddba7e483d-kube-api-access-dd2p9\") pod \"auto-csr-approver-29567802-9j79r\" (UID: \"f597ce65-244c-4e18-a184-65ddba7e483d\") " 
pod="openshift-infra/auto-csr-approver-29567802-9j79r" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.416089 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd2p9\" (UniqueName: \"kubernetes.io/projected/f597ce65-244c-4e18-a184-65ddba7e483d-kube-api-access-dd2p9\") pod \"auto-csr-approver-29567802-9j79r\" (UID: \"f597ce65-244c-4e18-a184-65ddba7e483d\") " pod="openshift-infra/auto-csr-approver-29567802-9j79r" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.444948 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd2p9\" (UniqueName: \"kubernetes.io/projected/f597ce65-244c-4e18-a184-65ddba7e483d-kube-api-access-dd2p9\") pod \"auto-csr-approver-29567802-9j79r\" (UID: \"f597ce65-244c-4e18-a184-65ddba7e483d\") " pod="openshift-infra/auto-csr-approver-29567802-9j79r" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.488137 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567802-9j79r" Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.740726 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567802-9j79r"] Mar 21 04:42:00 crc kubenswrapper[4923]: I0321 04:42:00.890776 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567802-9j79r" event={"ID":"f597ce65-244c-4e18-a184-65ddba7e483d","Type":"ContainerStarted","Data":"afd4b840230bb961deadb93102def78d394ce2025c8d76ad2e84d8559dac8298"} Mar 21 04:42:02 crc kubenswrapper[4923]: I0321 04:42:02.907806 4923 generic.go:334] "Generic (PLEG): container finished" podID="f597ce65-244c-4e18-a184-65ddba7e483d" containerID="2a38a10b56e9b3ae005e4e75c0d35b99d2554795300e2274c96e6b223e0b0db3" exitCode=0 Mar 21 04:42:02 crc kubenswrapper[4923]: I0321 04:42:02.907888 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29567802-9j79r" event={"ID":"f597ce65-244c-4e18-a184-65ddba7e483d","Type":"ContainerDied","Data":"2a38a10b56e9b3ae005e4e75c0d35b99d2554795300e2274c96e6b223e0b0db3"} Mar 21 04:42:03 crc kubenswrapper[4923]: I0321 04:42:03.236149 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 21 04:42:03 crc kubenswrapper[4923]: I0321 04:42:03.236226 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 21 04:42:04 crc kubenswrapper[4923]: I0321 04:42:04.207065 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567802-9j79r" Mar 21 04:42:04 crc kubenswrapper[4923]: I0321 04:42:04.271065 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd2p9\" (UniqueName: \"kubernetes.io/projected/f597ce65-244c-4e18-a184-65ddba7e483d-kube-api-access-dd2p9\") pod \"f597ce65-244c-4e18-a184-65ddba7e483d\" (UID: \"f597ce65-244c-4e18-a184-65ddba7e483d\") " Mar 21 04:42:04 crc kubenswrapper[4923]: I0321 04:42:04.280594 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f597ce65-244c-4e18-a184-65ddba7e483d-kube-api-access-dd2p9" (OuterVolumeSpecName: "kube-api-access-dd2p9") pod "f597ce65-244c-4e18-a184-65ddba7e483d" (UID: "f597ce65-244c-4e18-a184-65ddba7e483d"). InnerVolumeSpecName "kube-api-access-dd2p9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:42:04 crc kubenswrapper[4923]: I0321 04:42:04.380216 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd2p9\" (UniqueName: \"kubernetes.io/projected/f597ce65-244c-4e18-a184-65ddba7e483d-kube-api-access-dd2p9\") on node \"crc\" DevicePath \"\"" Mar 21 04:42:04 crc kubenswrapper[4923]: I0321 04:42:04.931199 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567802-9j79r" event={"ID":"f597ce65-244c-4e18-a184-65ddba7e483d","Type":"ContainerDied","Data":"afd4b840230bb961deadb93102def78d394ce2025c8d76ad2e84d8559dac8298"} Mar 21 04:42:04 crc kubenswrapper[4923]: I0321 04:42:04.931254 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567802-9j79r" Mar 21 04:42:04 crc kubenswrapper[4923]: I0321 04:42:04.931271 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd4b840230bb961deadb93102def78d394ce2025c8d76ad2e84d8559dac8298" Mar 21 04:42:05 crc kubenswrapper[4923]: I0321 04:42:05.289216 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567796-xlf62"] Mar 21 04:42:05 crc kubenswrapper[4923]: I0321 04:42:05.298218 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567796-xlf62"] Mar 21 04:42:06 crc kubenswrapper[4923]: I0321 04:42:06.373130 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b637d963-84a9-4fee-b457-b4be405728c3" path="/var/lib/kubelet/pods/b637d963-84a9-4fee-b457-b4be405728c3/volumes" Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.406313 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w9slf"] Mar 21 04:42:12 crc kubenswrapper[4923]: E0321 04:42:12.407102 4923 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f597ce65-244c-4e18-a184-65ddba7e483d" containerName="oc" Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.407123 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="f597ce65-244c-4e18-a184-65ddba7e483d" containerName="oc" Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.407355 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="f597ce65-244c-4e18-a184-65ddba7e483d" containerName="oc" Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.408771 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9slf" Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.429581 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9slf"] Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.497619 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-utilities\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf" Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.498001 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8tpj\" (UniqueName: \"kubernetes.io/projected/b1d905fd-34e6-4a13-a74e-b67db9652481-kube-api-access-b8tpj\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf" Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.498293 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-catalog-content\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " 
pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.599700 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-catalog-content\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.600295 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-utilities\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.600581 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-catalog-content\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.600632 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8tpj\" (UniqueName: \"kubernetes.io/projected/b1d905fd-34e6-4a13-a74e-b67db9652481-kube-api-access-b8tpj\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.601111 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-utilities\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.637227 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8tpj\" (UniqueName: \"kubernetes.io/projected/b1d905fd-34e6-4a13-a74e-b67db9652481-kube-api-access-b8tpj\") pod \"redhat-marketplace-w9slf\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") " pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:12 crc kubenswrapper[4923]: I0321 04:42:12.748711 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:13 crc kubenswrapper[4923]: I0321 04:42:13.040339 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9slf"]
Mar 21 04:42:14 crc kubenswrapper[4923]: I0321 04:42:14.006254 4923 generic.go:334] "Generic (PLEG): container finished" podID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerID="36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc" exitCode=0
Mar 21 04:42:14 crc kubenswrapper[4923]: I0321 04:42:14.006371 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9slf" event={"ID":"b1d905fd-34e6-4a13-a74e-b67db9652481","Type":"ContainerDied","Data":"36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc"}
Mar 21 04:42:14 crc kubenswrapper[4923]: I0321 04:42:14.006633 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9slf" event={"ID":"b1d905fd-34e6-4a13-a74e-b67db9652481","Type":"ContainerStarted","Data":"c28e1a8dc4b07f944b5d0d2e148673e10c90472cf7a7136787e1eb3a7e1ecc29"}
Mar 21 04:42:15 crc kubenswrapper[4923]: I0321 04:42:15.016857 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9slf" event={"ID":"b1d905fd-34e6-4a13-a74e-b67db9652481","Type":"ContainerStarted","Data":"1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8"}
Mar 21 04:42:16 crc kubenswrapper[4923]: I0321 04:42:16.026923 4923 generic.go:334] "Generic (PLEG): container finished" podID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerID="1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8" exitCode=0
Mar 21 04:42:16 crc kubenswrapper[4923]: I0321 04:42:16.027006 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9slf" event={"ID":"b1d905fd-34e6-4a13-a74e-b67db9652481","Type":"ContainerDied","Data":"1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8"}
Mar 21 04:42:17 crc kubenswrapper[4923]: I0321 04:42:17.038203 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9slf" event={"ID":"b1d905fd-34e6-4a13-a74e-b67db9652481","Type":"ContainerStarted","Data":"0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0"}
Mar 21 04:42:17 crc kubenswrapper[4923]: I0321 04:42:17.067099 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w9slf" podStartSLOduration=2.650777413 podStartE2EDuration="5.06707683s" podCreationTimestamp="2026-03-21 04:42:12 +0000 UTC" firstStartedPulling="2026-03-21 04:42:14.009242466 +0000 UTC m=+1499.162253583" lastFinishedPulling="2026-03-21 04:42:16.425541903 +0000 UTC m=+1501.578553000" observedRunningTime="2026-03-21 04:42:17.065254229 +0000 UTC m=+1502.218265326" watchObservedRunningTime="2026-03-21 04:42:17.06707683 +0000 UTC m=+1502.220087927"
Mar 21 04:42:18 crc kubenswrapper[4923]: I0321 04:42:18.752109 4923 scope.go:117] "RemoveContainer" containerID="fc89be06fb80c29622892c0d9a3553de526cba4252622588f3ff8c30d0cba281"
Mar 21 04:42:18 crc kubenswrapper[4923]: I0321 04:42:18.773704 4923 scope.go:117] "RemoveContainer" containerID="b69584deb1c4b452b1bde1c2430749956ff729d838c27506a89d489f02d94dd9"
Mar 21 04:42:18 crc kubenswrapper[4923]: I0321 04:42:18.829729 4923 scope.go:117] "RemoveContainer" containerID="1d7bdb7189eaefd029af16537a610ffebb2e15977a2937064abc923ac43bba9e"
Mar 21 04:42:22 crc kubenswrapper[4923]: I0321 04:42:22.749591 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:22 crc kubenswrapper[4923]: I0321 04:42:22.750028 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:22 crc kubenswrapper[4923]: I0321 04:42:22.823208 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:23 crc kubenswrapper[4923]: I0321 04:42:23.147434 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:23 crc kubenswrapper[4923]: I0321 04:42:23.221995 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9slf"]
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.108063 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w9slf" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="registry-server" containerID="cri-o://0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0" gracePeriod=2
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.540294 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.579553 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-utilities\") pod \"b1d905fd-34e6-4a13-a74e-b67db9652481\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") "
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.579592 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8tpj\" (UniqueName: \"kubernetes.io/projected/b1d905fd-34e6-4a13-a74e-b67db9652481-kube-api-access-b8tpj\") pod \"b1d905fd-34e6-4a13-a74e-b67db9652481\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") "
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.579653 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-catalog-content\") pod \"b1d905fd-34e6-4a13-a74e-b67db9652481\" (UID: \"b1d905fd-34e6-4a13-a74e-b67db9652481\") "
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.580938 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-utilities" (OuterVolumeSpecName: "utilities") pod "b1d905fd-34e6-4a13-a74e-b67db9652481" (UID: "b1d905fd-34e6-4a13-a74e-b67db9652481"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.592536 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d905fd-34e6-4a13-a74e-b67db9652481-kube-api-access-b8tpj" (OuterVolumeSpecName: "kube-api-access-b8tpj") pod "b1d905fd-34e6-4a13-a74e-b67db9652481" (UID: "b1d905fd-34e6-4a13-a74e-b67db9652481"). InnerVolumeSpecName "kube-api-access-b8tpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.613354 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1d905fd-34e6-4a13-a74e-b67db9652481" (UID: "b1d905fd-34e6-4a13-a74e-b67db9652481"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.680766 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.680796 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d905fd-34e6-4a13-a74e-b67db9652481-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 04:42:25 crc kubenswrapper[4923]: I0321 04:42:25.680807 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8tpj\" (UniqueName: \"kubernetes.io/projected/b1d905fd-34e6-4a13-a74e-b67db9652481-kube-api-access-b8tpj\") on node \"crc\" DevicePath \"\""
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.115387 4923 generic.go:334] "Generic (PLEG): container finished" podID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerID="0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0" exitCode=0
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.115655 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9slf" event={"ID":"b1d905fd-34e6-4a13-a74e-b67db9652481","Type":"ContainerDied","Data":"0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0"}
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.115682 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9slf" event={"ID":"b1d905fd-34e6-4a13-a74e-b67db9652481","Type":"ContainerDied","Data":"c28e1a8dc4b07f944b5d0d2e148673e10c90472cf7a7136787e1eb3a7e1ecc29"}
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.115699 4923 scope.go:117] "RemoveContainer" containerID="0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.115813 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9slf"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.135579 4923 scope.go:117] "RemoveContainer" containerID="1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.151611 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9slf"]
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.157362 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9slf"]
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.158854 4923 scope.go:117] "RemoveContainer" containerID="36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.178965 4923 scope.go:117] "RemoveContainer" containerID="0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0"
Mar 21 04:42:26 crc kubenswrapper[4923]: E0321 04:42:26.179465 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0\": container with ID starting with 0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0 not found: ID does not exist" containerID="0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.179514 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0"} err="failed to get container status \"0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0\": rpc error: code = NotFound desc = could not find container \"0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0\": container with ID starting with 0fce409576fb18671d9407205e17f5c3202e306a3c9b76caf689ce80b43001e0 not found: ID does not exist"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.179548 4923 scope.go:117] "RemoveContainer" containerID="1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8"
Mar 21 04:42:26 crc kubenswrapper[4923]: E0321 04:42:26.179874 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8\": container with ID starting with 1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8 not found: ID does not exist" containerID="1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.179977 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8"} err="failed to get container status \"1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8\": rpc error: code = NotFound desc = could not find container \"1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8\": container with ID starting with 1e559cfbcb4bcec24d4d5a8520d5b4d34614a552fe8449c0831095bdbd261bd8 not found: ID does not exist"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.180049 4923 scope.go:117] "RemoveContainer" containerID="36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc"
Mar 21 04:42:26 crc kubenswrapper[4923]: E0321 04:42:26.180391 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc\": container with ID starting with 36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc not found: ID does not exist" containerID="36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.180425 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc"} err="failed to get container status \"36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc\": rpc error: code = NotFound desc = could not find container \"36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc\": container with ID starting with 36a7df38fc93f0aa457485f288f685cdc25c1607d75cb65016d22816904f14dc not found: ID does not exist"
Mar 21 04:42:26 crc kubenswrapper[4923]: I0321 04:42:26.366936 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" path="/var/lib/kubelet/pods/b1d905fd-34e6-4a13-a74e-b67db9652481/volumes"
Mar 21 04:42:33 crc kubenswrapper[4923]: I0321 04:42:33.236161 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:42:33 crc kubenswrapper[4923]: I0321 04:42:33.236914 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.236360 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.237075 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.237139 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr"
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.237930 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.238029 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" gracePeriod=600
Mar 21 04:43:03 crc kubenswrapper[4923]: E0321 04:43:03.372352 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.381042 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" exitCode=0
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.381145 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"}
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.381238 4923 scope.go:117] "RemoveContainer" containerID="fe257518d39f35ad4d416456034fb8a1f94f5ac481d8616c2afbf6e356e0b8a6"
Mar 21 04:43:03 crc kubenswrapper[4923]: I0321 04:43:03.382089 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:43:03 crc kubenswrapper[4923]: E0321 04:43:03.383270 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:43:16 crc kubenswrapper[4923]: I0321 04:43:16.369953 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:43:16 crc kubenswrapper[4923]: E0321 04:43:16.371033 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:43:27 crc kubenswrapper[4923]: I0321 04:43:27.358561 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:43:27 crc kubenswrapper[4923]: E0321 04:43:27.359494 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:43:38 crc kubenswrapper[4923]: I0321 04:43:38.367644 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:43:38 crc kubenswrapper[4923]: E0321 04:43:38.368808 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:43:50 crc kubenswrapper[4923]: I0321 04:43:50.359004 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:43:50 crc kubenswrapper[4923]: E0321 04:43:50.359763 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.140340 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567804-9mhfl"]
Mar 21 04:44:00 crc kubenswrapper[4923]: E0321 04:44:00.140839 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="extract-content"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.140850 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="extract-content"
Mar 21 04:44:00 crc kubenswrapper[4923]: E0321 04:44:00.140870 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="registry-server"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.140876 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="registry-server"
Mar 21 04:44:00 crc kubenswrapper[4923]: E0321 04:44:00.140890 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="extract-utilities"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.140897 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="extract-utilities"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.140994 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d905fd-34e6-4a13-a74e-b67db9652481" containerName="registry-server"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.141410 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567804-9mhfl"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.143148 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.143478 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.147451 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.152744 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567804-9mhfl"]
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.284709 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lb75\" (UniqueName: \"kubernetes.io/projected/4324e451-0ba5-4627-b39e-598291c01252-kube-api-access-2lb75\") pod \"auto-csr-approver-29567804-9mhfl\" (UID: \"4324e451-0ba5-4627-b39e-598291c01252\") " pod="openshift-infra/auto-csr-approver-29567804-9mhfl"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.385998 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lb75\" (UniqueName: \"kubernetes.io/projected/4324e451-0ba5-4627-b39e-598291c01252-kube-api-access-2lb75\") pod \"auto-csr-approver-29567804-9mhfl\" (UID: \"4324e451-0ba5-4627-b39e-598291c01252\") " pod="openshift-infra/auto-csr-approver-29567804-9mhfl"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.411585 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lb75\" (UniqueName: \"kubernetes.io/projected/4324e451-0ba5-4627-b39e-598291c01252-kube-api-access-2lb75\") pod \"auto-csr-approver-29567804-9mhfl\" (UID: \"4324e451-0ba5-4627-b39e-598291c01252\") " pod="openshift-infra/auto-csr-approver-29567804-9mhfl"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.458380 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567804-9mhfl"
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.888235 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567804-9mhfl"]
Mar 21 04:44:00 crc kubenswrapper[4923]: I0321 04:44:00.903434 4923 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 21 04:44:01 crc kubenswrapper[4923]: I0321 04:44:01.804729 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567804-9mhfl" event={"ID":"4324e451-0ba5-4627-b39e-598291c01252","Type":"ContainerStarted","Data":"54eeb3cc84d54b1b178200123cdc1c93bd8b012589aa422670f4fb6a9b99cd38"}
Mar 21 04:44:02 crc kubenswrapper[4923]: I0321 04:44:02.816715 4923 generic.go:334] "Generic (PLEG): container finished" podID="4324e451-0ba5-4627-b39e-598291c01252" containerID="147840b5a4978c8954bf46a9367f86305bd3a9746ee810fca755fe8ba6c89e4f" exitCode=0
Mar 21 04:44:02 crc kubenswrapper[4923]: I0321 04:44:02.816888 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567804-9mhfl" event={"ID":"4324e451-0ba5-4627-b39e-598291c01252","Type":"ContainerDied","Data":"147840b5a4978c8954bf46a9367f86305bd3a9746ee810fca755fe8ba6c89e4f"}
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.358574 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:44:05 crc kubenswrapper[4923]: E0321 04:44:05.358841 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.453544 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567804-9mhfl"
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.653264 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lb75\" (UniqueName: \"kubernetes.io/projected/4324e451-0ba5-4627-b39e-598291c01252-kube-api-access-2lb75\") pod \"4324e451-0ba5-4627-b39e-598291c01252\" (UID: \"4324e451-0ba5-4627-b39e-598291c01252\") "
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.659417 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4324e451-0ba5-4627-b39e-598291c01252-kube-api-access-2lb75" (OuterVolumeSpecName: "kube-api-access-2lb75") pod "4324e451-0ba5-4627-b39e-598291c01252" (UID: "4324e451-0ba5-4627-b39e-598291c01252"). InnerVolumeSpecName "kube-api-access-2lb75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.754503 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lb75\" (UniqueName: \"kubernetes.io/projected/4324e451-0ba5-4627-b39e-598291c01252-kube-api-access-2lb75\") on node \"crc\" DevicePath \"\""
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.842878 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567804-9mhfl" event={"ID":"4324e451-0ba5-4627-b39e-598291c01252","Type":"ContainerDied","Data":"54eeb3cc84d54b1b178200123cdc1c93bd8b012589aa422670f4fb6a9b99cd38"}
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.842915 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54eeb3cc84d54b1b178200123cdc1c93bd8b012589aa422670f4fb6a9b99cd38"
Mar 21 04:44:05 crc kubenswrapper[4923]: I0321 04:44:05.842984 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567804-9mhfl"
Mar 21 04:44:06 crc kubenswrapper[4923]: I0321 04:44:06.534027 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567798-ddkkh"]
Mar 21 04:44:06 crc kubenswrapper[4923]: I0321 04:44:06.541265 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567798-ddkkh"]
Mar 21 04:44:08 crc kubenswrapper[4923]: I0321 04:44:08.370561 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3134e8ba-8705-4780-aa7f-5a644e3949ee" path="/var/lib/kubelet/pods/3134e8ba-8705-4780-aa7f-5a644e3949ee/volumes"
Mar 21 04:44:17 crc kubenswrapper[4923]: I0321 04:44:17.358921 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:44:17 crc kubenswrapper[4923]: E0321 04:44:17.360068 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:44:18 crc kubenswrapper[4923]: I0321 04:44:18.953730 4923 scope.go:117] "RemoveContainer" containerID="ff8855d19fbe371a25615edf487270a2b9271ec581f85e810396c50aea439a38"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.809249 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2f47q/must-gather-45b6c"]
Mar 21 04:44:22 crc kubenswrapper[4923]: E0321 04:44:22.811515 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4324e451-0ba5-4627-b39e-598291c01252" containerName="oc"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.811655 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="4324e451-0ba5-4627-b39e-598291c01252" containerName="oc"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.811947 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="4324e451-0ba5-4627-b39e-598291c01252" containerName="oc"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.812991 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.815220 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2f47q"/"openshift-service-ca.crt"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.816469 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2f47q"/"kube-root-ca.crt"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.825272 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2f47q/must-gather-45b6c"]
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.844807 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq487\" (UniqueName: \"kubernetes.io/projected/7278e08b-b344-488f-9a42-25058a285212-kube-api-access-xq487\") pod \"must-gather-45b6c\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") " pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.844875 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7278e08b-b344-488f-9a42-25058a285212-must-gather-output\") pod \"must-gather-45b6c\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") " pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.946225 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq487\" (UniqueName: \"kubernetes.io/projected/7278e08b-b344-488f-9a42-25058a285212-kube-api-access-xq487\") pod \"must-gather-45b6c\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") " pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.946285 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7278e08b-b344-488f-9a42-25058a285212-must-gather-output\") pod \"must-gather-45b6c\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") " pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.946755 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7278e08b-b344-488f-9a42-25058a285212-must-gather-output\") pod \"must-gather-45b6c\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") " pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:22 crc kubenswrapper[4923]: I0321 04:44:22.967170 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq487\" (UniqueName: \"kubernetes.io/projected/7278e08b-b344-488f-9a42-25058a285212-kube-api-access-xq487\") pod \"must-gather-45b6c\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") " pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:23 crc kubenswrapper[4923]: I0321 04:44:23.136460 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:44:23 crc kubenswrapper[4923]: I0321 04:44:23.385307 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2f47q/must-gather-45b6c"]
Mar 21 04:44:24 crc kubenswrapper[4923]: I0321 04:44:24.005907 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2f47q/must-gather-45b6c" event={"ID":"7278e08b-b344-488f-9a42-25058a285212","Type":"ContainerStarted","Data":"ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab"}
Mar 21 04:44:24 crc kubenswrapper[4923]: I0321 04:44:24.005962 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2f47q/must-gather-45b6c" event={"ID":"7278e08b-b344-488f-9a42-25058a285212","Type":"ContainerStarted","Data":"6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a"}
Mar 21 04:44:24 crc kubenswrapper[4923]: I0321 04:44:24.005983 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2f47q/must-gather-45b6c" event={"ID":"7278e08b-b344-488f-9a42-25058a285212","Type":"ContainerStarted","Data":"3520ccbc2c86e5511a8941b1a26acb842f9dd4cf1c8e5fd3b22f387d5de2aa47"}
Mar 21 04:44:24 crc kubenswrapper[4923]: I0321 04:44:24.023459 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2f47q/must-gather-45b6c" podStartSLOduration=2.023438299 podStartE2EDuration="2.023438299s" podCreationTimestamp="2026-03-21 04:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-21 04:44:24.021581497 +0000 UTC m=+1629.174592604" watchObservedRunningTime="2026-03-21 04:44:24.023438299 +0000 UTC m=+1629.176449396"
Mar 21 04:44:30 crc kubenswrapper[4923]: I0321 04:44:30.359029 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:44:30 crc
kubenswrapper[4923]: E0321 04:44:30.359590 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:44:41 crc kubenswrapper[4923]: I0321 04:44:41.359026 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" Mar 21 04:44:41 crc kubenswrapper[4923]: E0321 04:44:41.360016 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:44:54 crc kubenswrapper[4923]: I0321 04:44:54.360193 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" Mar 21 04:44:54 crc kubenswrapper[4923]: E0321 04:44:54.361041 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.151926 4923 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4"] Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.153877 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.157850 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.163965 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.165048 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4"] Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.180785 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d32bee7f-2e46-449d-974e-4dd111d02842-config-volume\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.180857 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zglcz\" (UniqueName: \"kubernetes.io/projected/d32bee7f-2e46-449d-974e-4dd111d02842-kube-api-access-zglcz\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.181005 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d32bee7f-2e46-449d-974e-4dd111d02842-secret-volume\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.281977 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d32bee7f-2e46-449d-974e-4dd111d02842-secret-volume\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.282988 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d32bee7f-2e46-449d-974e-4dd111d02842-config-volume\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.283038 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zglcz\" (UniqueName: \"kubernetes.io/projected/d32bee7f-2e46-449d-974e-4dd111d02842-kube-api-access-zglcz\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.283721 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d32bee7f-2e46-449d-974e-4dd111d02842-config-volume\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.288902 4923 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d32bee7f-2e46-449d-974e-4dd111d02842-secret-volume\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.304069 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zglcz\" (UniqueName: \"kubernetes.io/projected/d32bee7f-2e46-449d-974e-4dd111d02842-kube-api-access-zglcz\") pod \"collect-profiles-29567805-g2xk4\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.477594 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:00 crc kubenswrapper[4923]: I0321 04:45:00.684850 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4"] Mar 21 04:45:01 crc kubenswrapper[4923]: I0321 04:45:01.249548 4923 generic.go:334] "Generic (PLEG): container finished" podID="d32bee7f-2e46-449d-974e-4dd111d02842" containerID="d90022c01ba431480ed2bc2418572abd40343dd1381e697854b5c156dcd2a783" exitCode=0 Mar 21 04:45:01 crc kubenswrapper[4923]: I0321 04:45:01.249623 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" event={"ID":"d32bee7f-2e46-449d-974e-4dd111d02842","Type":"ContainerDied","Data":"d90022c01ba431480ed2bc2418572abd40343dd1381e697854b5c156dcd2a783"} Mar 21 04:45:01 crc kubenswrapper[4923]: I0321 04:45:01.249859 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" 
event={"ID":"d32bee7f-2e46-449d-974e-4dd111d02842","Type":"ContainerStarted","Data":"7df6275e7f48742c5d3d28334e35a98bb5b88109b3781207d19d9bf3350598e1"} Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.531417 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.722059 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d32bee7f-2e46-449d-974e-4dd111d02842-config-volume\") pod \"d32bee7f-2e46-449d-974e-4dd111d02842\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.722549 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d32bee7f-2e46-449d-974e-4dd111d02842-secret-volume\") pod \"d32bee7f-2e46-449d-974e-4dd111d02842\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.722763 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zglcz\" (UniqueName: \"kubernetes.io/projected/d32bee7f-2e46-449d-974e-4dd111d02842-kube-api-access-zglcz\") pod \"d32bee7f-2e46-449d-974e-4dd111d02842\" (UID: \"d32bee7f-2e46-449d-974e-4dd111d02842\") " Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.722804 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d32bee7f-2e46-449d-974e-4dd111d02842-config-volume" (OuterVolumeSpecName: "config-volume") pod "d32bee7f-2e46-449d-974e-4dd111d02842" (UID: "d32bee7f-2e46-449d-974e-4dd111d02842"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.723402 4923 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d32bee7f-2e46-449d-974e-4dd111d02842-config-volume\") on node \"crc\" DevicePath \"\"" Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.731497 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32bee7f-2e46-449d-974e-4dd111d02842-kube-api-access-zglcz" (OuterVolumeSpecName: "kube-api-access-zglcz") pod "d32bee7f-2e46-449d-974e-4dd111d02842" (UID: "d32bee7f-2e46-449d-974e-4dd111d02842"). InnerVolumeSpecName "kube-api-access-zglcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.737079 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d32bee7f-2e46-449d-974e-4dd111d02842-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d32bee7f-2e46-449d-974e-4dd111d02842" (UID: "d32bee7f-2e46-449d-974e-4dd111d02842"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.824640 4923 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d32bee7f-2e46-449d-974e-4dd111d02842-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 21 04:45:02 crc kubenswrapper[4923]: I0321 04:45:02.824686 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zglcz\" (UniqueName: \"kubernetes.io/projected/d32bee7f-2e46-449d-974e-4dd111d02842-kube-api-access-zglcz\") on node \"crc\" DevicePath \"\"" Mar 21 04:45:03 crc kubenswrapper[4923]: I0321 04:45:03.267898 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" event={"ID":"d32bee7f-2e46-449d-974e-4dd111d02842","Type":"ContainerDied","Data":"7df6275e7f48742c5d3d28334e35a98bb5b88109b3781207d19d9bf3350598e1"} Mar 21 04:45:03 crc kubenswrapper[4923]: I0321 04:45:03.267942 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df6275e7f48742c5d3d28334e35a98bb5b88109b3781207d19d9bf3350598e1" Mar 21 04:45:03 crc kubenswrapper[4923]: I0321 04:45:03.268038 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29567805-g2xk4" Mar 21 04:45:05 crc kubenswrapper[4923]: I0321 04:45:05.358784 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" Mar 21 04:45:05 crc kubenswrapper[4923]: E0321 04:45:05.359472 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:45:16 crc kubenswrapper[4923]: I0321 04:45:16.178763 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rwpdp_b5571552-4369-46f6-ad29-a54b1f4a7a8f/control-plane-machine-set-operator/0.log" Mar 21 04:45:16 crc kubenswrapper[4923]: I0321 04:45:16.334135 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-22lp8_baaa32c9-702b-4a43-a7b7-7a98272f80f3/kube-rbac-proxy/0.log" Mar 21 04:45:16 crc kubenswrapper[4923]: I0321 04:45:16.366375 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-22lp8_baaa32c9-702b-4a43-a7b7-7a98272f80f3/machine-api-operator/0.log" Mar 21 04:45:20 crc kubenswrapper[4923]: I0321 04:45:20.358289 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" Mar 21 04:45:20 crc kubenswrapper[4923]: E0321 04:45:20.358948 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:45:33 crc kubenswrapper[4923]: I0321 04:45:33.359669 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" Mar 21 04:45:33 crc kubenswrapper[4923]: E0321 04:45:33.360476 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:45:45 crc kubenswrapper[4923]: I0321 04:45:45.359129 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" Mar 21 04:45:45 crc kubenswrapper[4923]: E0321 04:45:45.360359 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:45:46 crc kubenswrapper[4923]: I0321 04:45:46.851268 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-748tf_cc875221-d66d-43a1-83ab-42059357491d/controller/0.log" Mar 21 04:45:46 crc kubenswrapper[4923]: I0321 04:45:46.862714 4923 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-7bb4cc7c98-748tf_cc875221-d66d-43a1-83ab-42059357491d/kube-rbac-proxy/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.028251 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.220499 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.224326 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.238129 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.343176 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.485170 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.489280 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.522930 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.555221 4923 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.734242 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/controller/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.748828 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-metrics/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.750808 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-frr-files/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.798535 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/cp-reloader/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.923273 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/kube-rbac-proxy/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.986979 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/frr-metrics/0.log" Mar 21 04:45:47 crc kubenswrapper[4923]: I0321 04:45:47.988444 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/kube-rbac-proxy-frr/0.log" Mar 21 04:45:48 crc kubenswrapper[4923]: I0321 04:45:48.147202 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/reloader/0.log" Mar 21 04:45:48 crc kubenswrapper[4923]: I0321 04:45:48.186455 4923 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-6t644_a9dca852-085f-4e4a-9ded-ffb15aada6cb/frr-k8s-webhook-server/0.log" Mar 21 04:45:48 crc kubenswrapper[4923]: I0321 04:45:48.221181 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l7zhk_11798d91-f8f0-4ba6-9386-b0876b78d927/frr/0.log" Mar 21 04:45:48 crc kubenswrapper[4923]: I0321 04:45:48.365436 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fd8f45f-g4r57_99dd2fb0-56d7-40c7-836f-2f004f9dc676/manager/0.log" Mar 21 04:45:48 crc kubenswrapper[4923]: I0321 04:45:48.422638 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5b974f9ffb-m2lc4_ac4f10a7-8f47-40e1-9ca2-6f401c588c64/webhook-server/0.log" Mar 21 04:45:48 crc kubenswrapper[4923]: I0321 04:45:48.572682 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rw9p8_8f2886c8-0371-44b8-b2bc-59dfd3a193f6/kube-rbac-proxy/0.log" Mar 21 04:45:48 crc kubenswrapper[4923]: I0321 04:45:48.637946 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rw9p8_8f2886c8-0371-44b8-b2bc-59dfd3a193f6/speaker/0.log" Mar 21 04:45:58 crc kubenswrapper[4923]: I0321 04:45:58.359092 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04" Mar 21 04:45:58 crc kubenswrapper[4923]: E0321 04:45:58.360051 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.138302 
4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567806-dmk97"] Mar 21 04:46:00 crc kubenswrapper[4923]: E0321 04:46:00.138750 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32bee7f-2e46-449d-974e-4dd111d02842" containerName="collect-profiles" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.138780 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32bee7f-2e46-449d-974e-4dd111d02842" containerName="collect-profiles" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.139037 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32bee7f-2e46-449d-974e-4dd111d02842" containerName="collect-profiles" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.139912 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567806-dmk97" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.142297 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.142605 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.142739 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.150954 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567806-dmk97"] Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.191776 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhr9p\" (UniqueName: \"kubernetes.io/projected/350b99e7-def0-475a-8c05-b43efa181f19-kube-api-access-qhr9p\") pod \"auto-csr-approver-29567806-dmk97\" (UID: \"350b99e7-def0-475a-8c05-b43efa181f19\") " 
pod="openshift-infra/auto-csr-approver-29567806-dmk97" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.293165 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhr9p\" (UniqueName: \"kubernetes.io/projected/350b99e7-def0-475a-8c05-b43efa181f19-kube-api-access-qhr9p\") pod \"auto-csr-approver-29567806-dmk97\" (UID: \"350b99e7-def0-475a-8c05-b43efa181f19\") " pod="openshift-infra/auto-csr-approver-29567806-dmk97" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.313421 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhr9p\" (UniqueName: \"kubernetes.io/projected/350b99e7-def0-475a-8c05-b43efa181f19-kube-api-access-qhr9p\") pod \"auto-csr-approver-29567806-dmk97\" (UID: \"350b99e7-def0-475a-8c05-b43efa181f19\") " pod="openshift-infra/auto-csr-approver-29567806-dmk97" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.482569 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567806-dmk97" Mar 21 04:46:00 crc kubenswrapper[4923]: I0321 04:46:00.731196 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567806-dmk97"] Mar 21 04:46:01 crc kubenswrapper[4923]: I0321 04:46:01.647056 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567806-dmk97" event={"ID":"350b99e7-def0-475a-8c05-b43efa181f19","Type":"ContainerStarted","Data":"70e21db92db37a54c09de3c7dd7c292d26ad63e6b6e437499157d73f6cd41621"} Mar 21 04:46:02 crc kubenswrapper[4923]: I0321 04:46:02.656119 4923 generic.go:334] "Generic (PLEG): container finished" podID="350b99e7-def0-475a-8c05-b43efa181f19" containerID="66248429c2531a8e3f0e22efb222d7050e096ff3e585938772a560dae99799af" exitCode=0 Mar 21 04:46:02 crc kubenswrapper[4923]: I0321 04:46:02.656226 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29567806-dmk97" event={"ID":"350b99e7-def0-475a-8c05-b43efa181f19","Type":"ContainerDied","Data":"66248429c2531a8e3f0e22efb222d7050e096ff3e585938772a560dae99799af"}
Mar 21 04:46:03 crc kubenswrapper[4923]: I0321 04:46:03.931126 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567806-dmk97"
Mar 21 04:46:04 crc kubenswrapper[4923]: I0321 04:46:04.045511 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhr9p\" (UniqueName: \"kubernetes.io/projected/350b99e7-def0-475a-8c05-b43efa181f19-kube-api-access-qhr9p\") pod \"350b99e7-def0-475a-8c05-b43efa181f19\" (UID: \"350b99e7-def0-475a-8c05-b43efa181f19\") "
Mar 21 04:46:04 crc kubenswrapper[4923]: I0321 04:46:04.054156 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350b99e7-def0-475a-8c05-b43efa181f19-kube-api-access-qhr9p" (OuterVolumeSpecName: "kube-api-access-qhr9p") pod "350b99e7-def0-475a-8c05-b43efa181f19" (UID: "350b99e7-def0-475a-8c05-b43efa181f19"). InnerVolumeSpecName "kube-api-access-qhr9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:46:04 crc kubenswrapper[4923]: I0321 04:46:04.147511 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhr9p\" (UniqueName: \"kubernetes.io/projected/350b99e7-def0-475a-8c05-b43efa181f19-kube-api-access-qhr9p\") on node \"crc\" DevicePath \"\""
Mar 21 04:46:04 crc kubenswrapper[4923]: I0321 04:46:04.669785 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567806-dmk97" event={"ID":"350b99e7-def0-475a-8c05-b43efa181f19","Type":"ContainerDied","Data":"70e21db92db37a54c09de3c7dd7c292d26ad63e6b6e437499157d73f6cd41621"}
Mar 21 04:46:04 crc kubenswrapper[4923]: I0321 04:46:04.670123 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e21db92db37a54c09de3c7dd7c292d26ad63e6b6e437499157d73f6cd41621"
Mar 21 04:46:04 crc kubenswrapper[4923]: I0321 04:46:04.669873 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567806-dmk97"
Mar 21 04:46:05 crc kubenswrapper[4923]: I0321 04:46:05.004713 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567800-tgmnk"]
Mar 21 04:46:05 crc kubenswrapper[4923]: I0321 04:46:05.015858 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567800-tgmnk"]
Mar 21 04:46:06 crc kubenswrapper[4923]: I0321 04:46:06.371850 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559" path="/var/lib/kubelet/pods/a54b1a37-c2a5-4bfc-a2e2-349ddb2ab559/volumes"
Mar 21 04:46:10 crc kubenswrapper[4923]: I0321 04:46:10.358762 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:46:10 crc kubenswrapper[4923]: E0321 04:46:10.359186 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.344960 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/util/0.log"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.499302 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/pull/0.log"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.509523 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/util/0.log"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.517596 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/pull/0.log"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.712203 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/pull/0.log"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.733616 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/extract/0.log"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.750782 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1qlst9_5079aa2d-ce9f-4e98-bc7b-48fcb327a98f/util/0.log"
Mar 21 04:46:16 crc kubenswrapper[4923]: I0321 04:46:16.871875 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-utilities/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.038684 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-utilities/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.047149 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-content/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.049251 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-content/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.203353 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-utilities/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.222014 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/extract-content/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.394617 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-utilities/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.543158 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tcvnp_00194538-9e59-4093-b0a7-be2801bcef80/registry-server/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.547366 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-utilities/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.618648 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-content/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.628344 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-content/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.727438 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-utilities/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.730697 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/extract-content/0.log"
Mar 21 04:46:17 crc kubenswrapper[4923]: I0321 04:46:17.952614 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nj2n7_bba19ab1-fbf2-4a6f-a481-45e06896f9cd/marketplace-operator/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.009242 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-utilities/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.211390 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-content/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.256498 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-utilities/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.258251 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-content/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.263604 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckfp6_db0b017b-07ab-4c9a-b9ae-2111b970e1fe/registry-server/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.366033 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-utilities/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.410037 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/extract-content/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.505394 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9rmdl_c9c3cd1d-3b39-4990-9cfc-bbadb41f837e/registry-server/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.559866 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-utilities/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.701361 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-content/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.709062 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-content/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.711884 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-utilities/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.881969 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-utilities/0.log"
Mar 21 04:46:18 crc kubenswrapper[4923]: I0321 04:46:18.882015 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/extract-content/0.log"
Mar 21 04:46:19 crc kubenswrapper[4923]: I0321 04:46:19.054920 4923 scope.go:117] "RemoveContainer" containerID="da81ca060dfc40a6f45f2390b567dc6f26309950f9cc7440daf0e07dd0a1d45a"
Mar 21 04:46:19 crc kubenswrapper[4923]: I0321 04:46:19.145078 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2s8wr_d20826ac-d354-4e18-ba8e-affcf49ed187/registry-server/0.log"
Mar 21 04:46:23 crc kubenswrapper[4923]: I0321 04:46:23.358158 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:46:23 crc kubenswrapper[4923]: E0321 04:46:23.358732 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:46:35 crc kubenswrapper[4923]: I0321 04:46:35.359179 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:46:35 crc kubenswrapper[4923]: E0321 04:46:35.360406 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:46:46 crc kubenswrapper[4923]: I0321 04:46:46.363416 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:46:46 crc kubenswrapper[4923]: E0321 04:46:46.365028 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:46:58 crc kubenswrapper[4923]: I0321 04:46:58.359504 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:46:58 crc kubenswrapper[4923]: E0321 04:46:58.360822 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:47:10 crc kubenswrapper[4923]: I0321 04:47:10.358944 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:47:10 crc kubenswrapper[4923]: E0321 04:47:10.359649 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:47:24 crc kubenswrapper[4923]: I0321 04:47:24.359312 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:47:24 crc kubenswrapper[4923]: E0321 04:47:24.361464 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:47:31 crc kubenswrapper[4923]: I0321 04:47:31.433221 4923 generic.go:334] "Generic (PLEG): container finished" podID="7278e08b-b344-488f-9a42-25058a285212" containerID="6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a" exitCode=0
Mar 21 04:47:31 crc kubenswrapper[4923]: I0321 04:47:31.433302 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2f47q/must-gather-45b6c" event={"ID":"7278e08b-b344-488f-9a42-25058a285212","Type":"ContainerDied","Data":"6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a"}
Mar 21 04:47:31 crc kubenswrapper[4923]: I0321 04:47:31.433880 4923 scope.go:117] "RemoveContainer" containerID="6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a"
Mar 21 04:47:31 crc kubenswrapper[4923]: I0321 04:47:31.925504 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2f47q_must-gather-45b6c_7278e08b-b344-488f-9a42-25058a285212/gather/0.log"
Mar 21 04:47:35 crc kubenswrapper[4923]: I0321 04:47:35.358618 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:47:35 crc kubenswrapper[4923]: E0321 04:47:35.359723 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:47:41 crc kubenswrapper[4923]: I0321 04:47:41.573208 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2f47q/must-gather-45b6c"]
Mar 21 04:47:41 crc kubenswrapper[4923]: I0321 04:47:41.574204 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2f47q/must-gather-45b6c" podUID="7278e08b-b344-488f-9a42-25058a285212" containerName="copy" containerID="cri-o://ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab" gracePeriod=2
Mar 21 04:47:41 crc kubenswrapper[4923]: I0321 04:47:41.577629 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2f47q/must-gather-45b6c"]
Mar 21 04:47:41 crc kubenswrapper[4923]: I0321 04:47:41.901791 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2f47q_must-gather-45b6c_7278e08b-b344-488f-9a42-25058a285212/copy/0.log"
Mar 21 04:47:41 crc kubenswrapper[4923]: I0321 04:47:41.902490 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.075492 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq487\" (UniqueName: \"kubernetes.io/projected/7278e08b-b344-488f-9a42-25058a285212-kube-api-access-xq487\") pod \"7278e08b-b344-488f-9a42-25058a285212\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") "
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.075551 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7278e08b-b344-488f-9a42-25058a285212-must-gather-output\") pod \"7278e08b-b344-488f-9a42-25058a285212\" (UID: \"7278e08b-b344-488f-9a42-25058a285212\") "
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.086137 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7278e08b-b344-488f-9a42-25058a285212-kube-api-access-xq487" (OuterVolumeSpecName: "kube-api-access-xq487") pod "7278e08b-b344-488f-9a42-25058a285212" (UID: "7278e08b-b344-488f-9a42-25058a285212"). InnerVolumeSpecName "kube-api-access-xq487". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.151346 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7278e08b-b344-488f-9a42-25058a285212-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7278e08b-b344-488f-9a42-25058a285212" (UID: "7278e08b-b344-488f-9a42-25058a285212"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.176852 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq487\" (UniqueName: \"kubernetes.io/projected/7278e08b-b344-488f-9a42-25058a285212-kube-api-access-xq487\") on node \"crc\" DevicePath \"\""
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.176886 4923 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7278e08b-b344-488f-9a42-25058a285212-must-gather-output\") on node \"crc\" DevicePath \"\""
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.373118 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7278e08b-b344-488f-9a42-25058a285212" path="/var/lib/kubelet/pods/7278e08b-b344-488f-9a42-25058a285212/volumes"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.537052 4923 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2f47q_must-gather-45b6c_7278e08b-b344-488f-9a42-25058a285212/copy/0.log"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.537647 4923 generic.go:334] "Generic (PLEG): container finished" podID="7278e08b-b344-488f-9a42-25058a285212" containerID="ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab" exitCode=143
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.537728 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2f47q/must-gather-45b6c"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.537738 4923 scope.go:117] "RemoveContainer" containerID="ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.838924 4923 scope.go:117] "RemoveContainer" containerID="6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.883111 4923 scope.go:117] "RemoveContainer" containerID="ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab"
Mar 21 04:47:42 crc kubenswrapper[4923]: E0321 04:47:42.884206 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab\": container with ID starting with ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab not found: ID does not exist" containerID="ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.884252 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab"} err="failed to get container status \"ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab\": rpc error: code = NotFound desc = could not find container \"ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab\": container with ID starting with ab7e4f18c81eee8c80a016c9977104b9c963ff7da868ccb90eb94ca2bbfb3eab not found: ID does not exist"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.884288 4923 scope.go:117] "RemoveContainer" containerID="6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a"
Mar 21 04:47:42 crc kubenswrapper[4923]: E0321 04:47:42.884644 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a\": container with ID starting with 6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a not found: ID does not exist" containerID="6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a"
Mar 21 04:47:42 crc kubenswrapper[4923]: I0321 04:47:42.884681 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a"} err="failed to get container status \"6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a\": rpc error: code = NotFound desc = could not find container \"6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a\": container with ID starting with 6228a79dfe619903fc1b645dd794f40f45d1656df0814fe5dc23feda73eb1f5a not found: ID does not exist"
Mar 21 04:47:48 crc kubenswrapper[4923]: I0321 04:47:48.358958 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:47:48 crc kubenswrapper[4923]: E0321 04:47:48.361142 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.156548 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567808-d2fbf"]
Mar 21 04:48:00 crc kubenswrapper[4923]: E0321 04:48:00.157495 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7278e08b-b344-488f-9a42-25058a285212" containerName="copy"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.157515 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7278e08b-b344-488f-9a42-25058a285212" containerName="copy"
Mar 21 04:48:00 crc kubenswrapper[4923]: E0321 04:48:00.157542 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350b99e7-def0-475a-8c05-b43efa181f19" containerName="oc"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.157554 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="350b99e7-def0-475a-8c05-b43efa181f19" containerName="oc"
Mar 21 04:48:00 crc kubenswrapper[4923]: E0321 04:48:00.157587 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7278e08b-b344-488f-9a42-25058a285212" containerName="gather"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.157602 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="7278e08b-b344-488f-9a42-25058a285212" containerName="gather"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.157772 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="350b99e7-def0-475a-8c05-b43efa181f19" containerName="oc"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.157790 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="7278e08b-b344-488f-9a42-25058a285212" containerName="gather"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.157804 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="7278e08b-b344-488f-9a42-25058a285212" containerName="copy"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.158383 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567808-d2fbf"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.165850 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567808-d2fbf"]
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.167395 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.168429 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.168587 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.327891 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnv5c\" (UniqueName: \"kubernetes.io/projected/0b2dd808-71e6-4ab5-99a6-e075c83dbcb0-kube-api-access-dnv5c\") pod \"auto-csr-approver-29567808-d2fbf\" (UID: \"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0\") " pod="openshift-infra/auto-csr-approver-29567808-d2fbf"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.429191 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnv5c\" (UniqueName: \"kubernetes.io/projected/0b2dd808-71e6-4ab5-99a6-e075c83dbcb0-kube-api-access-dnv5c\") pod \"auto-csr-approver-29567808-d2fbf\" (UID: \"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0\") " pod="openshift-infra/auto-csr-approver-29567808-d2fbf"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.464093 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnv5c\" (UniqueName: \"kubernetes.io/projected/0b2dd808-71e6-4ab5-99a6-e075c83dbcb0-kube-api-access-dnv5c\") pod \"auto-csr-approver-29567808-d2fbf\" (UID: \"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0\") " pod="openshift-infra/auto-csr-approver-29567808-d2fbf"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.485026 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567808-d2fbf"
Mar 21 04:48:00 crc kubenswrapper[4923]: I0321 04:48:00.994618 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567808-d2fbf"]
Mar 21 04:48:01 crc kubenswrapper[4923]: W0321 04:48:01.004640 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b2dd808_71e6_4ab5_99a6_e075c83dbcb0.slice/crio-c5b7e0fd031b26526f6d7a239811cf1b6d434ad2bbf6fbbd6ca4a593b6be76ab WatchSource:0}: Error finding container c5b7e0fd031b26526f6d7a239811cf1b6d434ad2bbf6fbbd6ca4a593b6be76ab: Status 404 returned error can't find the container with id c5b7e0fd031b26526f6d7a239811cf1b6d434ad2bbf6fbbd6ca4a593b6be76ab
Mar 21 04:48:01 crc kubenswrapper[4923]: I0321 04:48:01.358264 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:48:01 crc kubenswrapper[4923]: E0321 04:48:01.358581 4923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cv5gr_openshift-machine-config-operator(34cdf206-b121-415c-ae40-21245192e724)\"" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724"
Mar 21 04:48:01 crc kubenswrapper[4923]: I0321 04:48:01.676982 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567808-d2fbf" event={"ID":"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0","Type":"ContainerStarted","Data":"c5b7e0fd031b26526f6d7a239811cf1b6d434ad2bbf6fbbd6ca4a593b6be76ab"}
Mar 21 04:48:02 crc kubenswrapper[4923]: I0321 04:48:02.686894 4923 generic.go:334] "Generic (PLEG): container finished" podID="0b2dd808-71e6-4ab5-99a6-e075c83dbcb0" containerID="86acc7b0d8e14c2c2ca11e23892502ca6d568570015822c8a36ccafd05a9f30c" exitCode=0
Mar 21 04:48:02 crc kubenswrapper[4923]: I0321 04:48:02.687215 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567808-d2fbf" event={"ID":"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0","Type":"ContainerDied","Data":"86acc7b0d8e14c2c2ca11e23892502ca6d568570015822c8a36ccafd05a9f30c"}
Mar 21 04:48:03 crc kubenswrapper[4923]: I0321 04:48:03.910240 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567808-d2fbf"
Mar 21 04:48:03 crc kubenswrapper[4923]: I0321 04:48:03.997521 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnv5c\" (UniqueName: \"kubernetes.io/projected/0b2dd808-71e6-4ab5-99a6-e075c83dbcb0-kube-api-access-dnv5c\") pod \"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0\" (UID: \"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0\") "
Mar 21 04:48:04 crc kubenswrapper[4923]: I0321 04:48:04.005347 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2dd808-71e6-4ab5-99a6-e075c83dbcb0-kube-api-access-dnv5c" (OuterVolumeSpecName: "kube-api-access-dnv5c") pod "0b2dd808-71e6-4ab5-99a6-e075c83dbcb0" (UID: "0b2dd808-71e6-4ab5-99a6-e075c83dbcb0"). InnerVolumeSpecName "kube-api-access-dnv5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:48:04 crc kubenswrapper[4923]: I0321 04:48:04.099168 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnv5c\" (UniqueName: \"kubernetes.io/projected/0b2dd808-71e6-4ab5-99a6-e075c83dbcb0-kube-api-access-dnv5c\") on node \"crc\" DevicePath \"\""
Mar 21 04:48:04 crc kubenswrapper[4923]: I0321 04:48:04.705831 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567808-d2fbf" event={"ID":"0b2dd808-71e6-4ab5-99a6-e075c83dbcb0","Type":"ContainerDied","Data":"c5b7e0fd031b26526f6d7a239811cf1b6d434ad2bbf6fbbd6ca4a593b6be76ab"}
Mar 21 04:48:04 crc kubenswrapper[4923]: I0321 04:48:04.705904 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b7e0fd031b26526f6d7a239811cf1b6d434ad2bbf6fbbd6ca4a593b6be76ab"
Mar 21 04:48:04 crc kubenswrapper[4923]: I0321 04:48:04.705963 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567808-d2fbf"
Mar 21 04:48:04 crc kubenswrapper[4923]: I0321 04:48:04.996839 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567802-9j79r"]
Mar 21 04:48:05 crc kubenswrapper[4923]: I0321 04:48:05.004844 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567802-9j79r"]
Mar 21 04:48:06 crc kubenswrapper[4923]: I0321 04:48:06.371797 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f597ce65-244c-4e18-a184-65ddba7e483d" path="/var/lib/kubelet/pods/f597ce65-244c-4e18-a184-65ddba7e483d/volumes"
Mar 21 04:48:14 crc kubenswrapper[4923]: I0321 04:48:14.358592 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:48:14 crc kubenswrapper[4923]: I0321 04:48:14.800083 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"83cd8e32365a453dcfdb5a57011f0d61d2d4a1d8ad826dd8bdaa91c0f8253fda"}
Mar 21 04:48:19 crc kubenswrapper[4923]: I0321 04:48:19.126810 4923 scope.go:117] "RemoveContainer" containerID="2a38a10b56e9b3ae005e4e75c0d35b99d2554795300e2274c96e6b223e0b0db3"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.344377 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8xnq9"]
Mar 21 04:49:02 crc kubenswrapper[4923]: E0321 04:49:02.345078 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2dd808-71e6-4ab5-99a6-e075c83dbcb0" containerName="oc"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.345090 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2dd808-71e6-4ab5-99a6-e075c83dbcb0" containerName="oc"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.345213 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2dd808-71e6-4ab5-99a6-e075c83dbcb0" containerName="oc"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.345989 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xnq9"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.364475 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xnq9"]
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.492813 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-catalog-content\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.493197 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5t2\" (UniqueName: \"kubernetes.io/projected/aef59204-b573-40c0-a01a-d29d24f11e2d-kube-api-access-fp5t2\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.493402 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-utilities\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.594717 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-catalog-content\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9"
Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.594788 4923 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-fp5t2\" (UniqueName: \"kubernetes.io/projected/aef59204-b573-40c0-a01a-d29d24f11e2d-kube-api-access-fp5t2\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.594815 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-utilities\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.595375 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-utilities\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.595787 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-catalog-content\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.621417 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5t2\" (UniqueName: \"kubernetes.io/projected/aef59204-b573-40c0-a01a-d29d24f11e2d-kube-api-access-fp5t2\") pod \"certified-operators-8xnq9\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.666140 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:02 crc kubenswrapper[4923]: I0321 04:49:02.872651 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xnq9"] Mar 21 04:49:03 crc kubenswrapper[4923]: I0321 04:49:03.320337 4923 generic.go:334] "Generic (PLEG): container finished" podID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerID="bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872" exitCode=0 Mar 21 04:49:03 crc kubenswrapper[4923]: I0321 04:49:03.320768 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xnq9" event={"ID":"aef59204-b573-40c0-a01a-d29d24f11e2d","Type":"ContainerDied","Data":"bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872"} Mar 21 04:49:03 crc kubenswrapper[4923]: I0321 04:49:03.320837 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xnq9" event={"ID":"aef59204-b573-40c0-a01a-d29d24f11e2d","Type":"ContainerStarted","Data":"e56e1c5e4914477662ef9197152ae7b2bb1afb520ad31bfd9d57e67b32634b7e"} Mar 21 04:49:03 crc kubenswrapper[4923]: I0321 04:49:03.322957 4923 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 21 04:49:03 crc kubenswrapper[4923]: I0321 04:49:03.944181 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r5cr4"] Mar 21 04:49:03 crc kubenswrapper[4923]: I0321 04:49:03.945779 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:03 crc kubenswrapper[4923]: I0321 04:49:03.958274 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5cr4"] Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.010987 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-utilities\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.011046 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-catalog-content\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.011085 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktmn4\" (UniqueName: \"kubernetes.io/projected/3d030a26-8f40-4736-9420-c899c1b8b6ec-kube-api-access-ktmn4\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.111964 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-catalog-content\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.112080 4923 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ktmn4\" (UniqueName: \"kubernetes.io/projected/3d030a26-8f40-4736-9420-c899c1b8b6ec-kube-api-access-ktmn4\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.112143 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-utilities\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.112733 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-catalog-content\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.113128 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-utilities\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.132470 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktmn4\" (UniqueName: \"kubernetes.io/projected/3d030a26-8f40-4736-9420-c899c1b8b6ec-kube-api-access-ktmn4\") pod \"community-operators-r5cr4\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.266153 4923 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.338212 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xnq9" event={"ID":"aef59204-b573-40c0-a01a-d29d24f11e2d","Type":"ContainerStarted","Data":"08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850"} Mar 21 04:49:04 crc kubenswrapper[4923]: I0321 04:49:04.552771 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5cr4"] Mar 21 04:49:04 crc kubenswrapper[4923]: W0321 04:49:04.557691 4923 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d030a26_8f40_4736_9420_c899c1b8b6ec.slice/crio-e0aa6ce14181ef8866c186178ea104cc30b648e42cc527a04b909eeee309186e WatchSource:0}: Error finding container e0aa6ce14181ef8866c186178ea104cc30b648e42cc527a04b909eeee309186e: Status 404 returned error can't find the container with id e0aa6ce14181ef8866c186178ea104cc30b648e42cc527a04b909eeee309186e Mar 21 04:49:05 crc kubenswrapper[4923]: I0321 04:49:05.362061 4923 generic.go:334] "Generic (PLEG): container finished" podID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerID="bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9" exitCode=0 Mar 21 04:49:05 crc kubenswrapper[4923]: I0321 04:49:05.362157 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5cr4" event={"ID":"3d030a26-8f40-4736-9420-c899c1b8b6ec","Type":"ContainerDied","Data":"bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9"} Mar 21 04:49:05 crc kubenswrapper[4923]: I0321 04:49:05.362197 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5cr4" 
event={"ID":"3d030a26-8f40-4736-9420-c899c1b8b6ec","Type":"ContainerStarted","Data":"e0aa6ce14181ef8866c186178ea104cc30b648e42cc527a04b909eeee309186e"} Mar 21 04:49:05 crc kubenswrapper[4923]: I0321 04:49:05.366756 4923 generic.go:334] "Generic (PLEG): container finished" podID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerID="08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850" exitCode=0 Mar 21 04:49:05 crc kubenswrapper[4923]: I0321 04:49:05.366809 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xnq9" event={"ID":"aef59204-b573-40c0-a01a-d29d24f11e2d","Type":"ContainerDied","Data":"08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850"} Mar 21 04:49:06 crc kubenswrapper[4923]: I0321 04:49:06.376959 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xnq9" event={"ID":"aef59204-b573-40c0-a01a-d29d24f11e2d","Type":"ContainerStarted","Data":"4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af"} Mar 21 04:49:06 crc kubenswrapper[4923]: I0321 04:49:06.379820 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5cr4" event={"ID":"3d030a26-8f40-4736-9420-c899c1b8b6ec","Type":"ContainerStarted","Data":"1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12"} Mar 21 04:49:06 crc kubenswrapper[4923]: I0321 04:49:06.401130 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8xnq9" podStartSLOduration=1.993241704 podStartE2EDuration="4.401112917s" podCreationTimestamp="2026-03-21 04:49:02 +0000 UTC" firstStartedPulling="2026-03-21 04:49:03.322740502 +0000 UTC m=+1908.475751589" lastFinishedPulling="2026-03-21 04:49:05.730611705 +0000 UTC m=+1910.883622802" observedRunningTime="2026-03-21 04:49:06.395441846 +0000 UTC m=+1911.548452993" watchObservedRunningTime="2026-03-21 04:49:06.401112917 +0000 UTC 
m=+1911.554124014" Mar 21 04:49:07 crc kubenswrapper[4923]: I0321 04:49:07.387857 4923 generic.go:334] "Generic (PLEG): container finished" podID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerID="1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12" exitCode=0 Mar 21 04:49:07 crc kubenswrapper[4923]: I0321 04:49:07.387906 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5cr4" event={"ID":"3d030a26-8f40-4736-9420-c899c1b8b6ec","Type":"ContainerDied","Data":"1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12"} Mar 21 04:49:08 crc kubenswrapper[4923]: I0321 04:49:08.398017 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5cr4" event={"ID":"3d030a26-8f40-4736-9420-c899c1b8b6ec","Type":"ContainerStarted","Data":"88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf"} Mar 21 04:49:08 crc kubenswrapper[4923]: I0321 04:49:08.431776 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r5cr4" podStartSLOduration=2.765212663 podStartE2EDuration="5.431758666s" podCreationTimestamp="2026-03-21 04:49:03 +0000 UTC" firstStartedPulling="2026-03-21 04:49:05.363868687 +0000 UTC m=+1910.516879804" lastFinishedPulling="2026-03-21 04:49:08.03041469 +0000 UTC m=+1913.183425807" observedRunningTime="2026-03-21 04:49:08.428034701 +0000 UTC m=+1913.581045848" watchObservedRunningTime="2026-03-21 04:49:08.431758666 +0000 UTC m=+1913.584769763" Mar 21 04:49:12 crc kubenswrapper[4923]: I0321 04:49:12.667374 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:12 crc kubenswrapper[4923]: I0321 04:49:12.667830 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:12 crc kubenswrapper[4923]: I0321 04:49:12.741978 
4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:13 crc kubenswrapper[4923]: I0321 04:49:13.507629 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:13 crc kubenswrapper[4923]: I0321 04:49:13.579766 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xnq9"] Mar 21 04:49:14 crc kubenswrapper[4923]: I0321 04:49:14.266454 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:14 crc kubenswrapper[4923]: I0321 04:49:14.266525 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:14 crc kubenswrapper[4923]: I0321 04:49:14.328233 4923 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:14 crc kubenswrapper[4923]: I0321 04:49:14.493578 4923 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:15 crc kubenswrapper[4923]: I0321 04:49:15.448452 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8xnq9" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="registry-server" containerID="cri-o://4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af" gracePeriod=2 Mar 21 04:49:15 crc kubenswrapper[4923]: I0321 04:49:15.743647 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5cr4"] Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.021963 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.083763 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-utilities\") pod \"aef59204-b573-40c0-a01a-d29d24f11e2d\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.085637 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-utilities" (OuterVolumeSpecName: "utilities") pod "aef59204-b573-40c0-a01a-d29d24f11e2d" (UID: "aef59204-b573-40c0-a01a-d29d24f11e2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.184952 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-catalog-content\") pod \"aef59204-b573-40c0-a01a-d29d24f11e2d\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.185017 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp5t2\" (UniqueName: \"kubernetes.io/projected/aef59204-b573-40c0-a01a-d29d24f11e2d-kube-api-access-fp5t2\") pod \"aef59204-b573-40c0-a01a-d29d24f11e2d\" (UID: \"aef59204-b573-40c0-a01a-d29d24f11e2d\") " Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.185506 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-utilities\") on node \"crc\" DevicePath \"\"" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.194172 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aef59204-b573-40c0-a01a-d29d24f11e2d-kube-api-access-fp5t2" (OuterVolumeSpecName: "kube-api-access-fp5t2") pod "aef59204-b573-40c0-a01a-d29d24f11e2d" (UID: "aef59204-b573-40c0-a01a-d29d24f11e2d"). InnerVolumeSpecName "kube-api-access-fp5t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.279841 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aef59204-b573-40c0-a01a-d29d24f11e2d" (UID: "aef59204-b573-40c0-a01a-d29d24f11e2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.286846 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aef59204-b573-40c0-a01a-d29d24f11e2d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.286899 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp5t2\" (UniqueName: \"kubernetes.io/projected/aef59204-b573-40c0-a01a-d29d24f11e2d-kube-api-access-fp5t2\") on node \"crc\" DevicePath \"\"" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.461645 4923 generic.go:334] "Generic (PLEG): container finished" podID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerID="4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af" exitCode=0 Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.461713 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xnq9" event={"ID":"aef59204-b573-40c0-a01a-d29d24f11e2d","Type":"ContainerDied","Data":"4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af"} Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.461767 4923 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xnq9" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.461789 4923 scope.go:117] "RemoveContainer" containerID="4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.461772 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xnq9" event={"ID":"aef59204-b573-40c0-a01a-d29d24f11e2d","Type":"ContainerDied","Data":"e56e1c5e4914477662ef9197152ae7b2bb1afb520ad31bfd9d57e67b32634b7e"} Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.461943 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r5cr4" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="registry-server" containerID="cri-o://88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf" gracePeriod=2 Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.496767 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xnq9"] Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.497973 4923 scope.go:117] "RemoveContainer" containerID="08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.502105 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8xnq9"] Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.523038 4923 scope.go:117] "RemoveContainer" containerID="bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.616631 4923 scope.go:117] "RemoveContainer" containerID="4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af" Mar 21 04:49:16 crc kubenswrapper[4923]: E0321 04:49:16.617159 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af\": container with ID starting with 4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af not found: ID does not exist" containerID="4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.617203 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af"} err="failed to get container status \"4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af\": rpc error: code = NotFound desc = could not find container \"4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af\": container with ID starting with 4f4ca0ed9ed7dcd8c0df2e686705aa497d78f5311cc7ac3dc6a3281c9c33e8af not found: ID does not exist" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.617233 4923 scope.go:117] "RemoveContainer" containerID="08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850" Mar 21 04:49:16 crc kubenswrapper[4923]: E0321 04:49:16.617808 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850\": container with ID starting with 08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850 not found: ID does not exist" containerID="08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.617868 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850"} err="failed to get container status \"08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850\": rpc error: code = NotFound desc = could not find container 
\"08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850\": container with ID starting with 08a978142c133c56464d9e481921b53e9f3662c8bce50f78997a4e17d701e850 not found: ID does not exist" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.617914 4923 scope.go:117] "RemoveContainer" containerID="bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872" Mar 21 04:49:16 crc kubenswrapper[4923]: E0321 04:49:16.618385 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872\": container with ID starting with bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872 not found: ID does not exist" containerID="bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872" Mar 21 04:49:16 crc kubenswrapper[4923]: I0321 04:49:16.618437 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872"} err="failed to get container status \"bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872\": rpc error: code = NotFound desc = could not find container \"bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872\": container with ID starting with bd0abf357431af93bd48aa7ef0aade233cc36ca98e7fc82be5813733125da872 not found: ID does not exist" Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.082059 4923 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r5cr4" Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.198918 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-utilities\") pod \"3d030a26-8f40-4736-9420-c899c1b8b6ec\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.199221 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktmn4\" (UniqueName: \"kubernetes.io/projected/3d030a26-8f40-4736-9420-c899c1b8b6ec-kube-api-access-ktmn4\") pod \"3d030a26-8f40-4736-9420-c899c1b8b6ec\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.199293 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-catalog-content\") pod \"3d030a26-8f40-4736-9420-c899c1b8b6ec\" (UID: \"3d030a26-8f40-4736-9420-c899c1b8b6ec\") " Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.200904 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-utilities" (OuterVolumeSpecName: "utilities") pod "3d030a26-8f40-4736-9420-c899c1b8b6ec" (UID: "3d030a26-8f40-4736-9420-c899c1b8b6ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.205720 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d030a26-8f40-4736-9420-c899c1b8b6ec-kube-api-access-ktmn4" (OuterVolumeSpecName: "kube-api-access-ktmn4") pod "3d030a26-8f40-4736-9420-c899c1b8b6ec" (UID: "3d030a26-8f40-4736-9420-c899c1b8b6ec"). InnerVolumeSpecName "kube-api-access-ktmn4". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.312238 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktmn4\" (UniqueName: \"kubernetes.io/projected/3d030a26-8f40-4736-9420-c899c1b8b6ec-kube-api-access-ktmn4\") on node \"crc\" DevicePath \"\""
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.312300 4923 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-utilities\") on node \"crc\" DevicePath \"\""
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.421566 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d030a26-8f40-4736-9420-c899c1b8b6ec" (UID: "3d030a26-8f40-4736-9420-c899c1b8b6ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.472834 4923 generic.go:334] "Generic (PLEG): container finished" podID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerID="88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf" exitCode=0
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.472911 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5cr4" event={"ID":"3d030a26-8f40-4736-9420-c899c1b8b6ec","Type":"ContainerDied","Data":"88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf"}
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.472950 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5cr4"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.472969 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5cr4" event={"ID":"3d030a26-8f40-4736-9420-c899c1b8b6ec","Type":"ContainerDied","Data":"e0aa6ce14181ef8866c186178ea104cc30b648e42cc527a04b909eeee309186e"}
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.472991 4923 scope.go:117] "RemoveContainer" containerID="88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.496584 4923 scope.go:117] "RemoveContainer" containerID="1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.515074 4923 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d030a26-8f40-4736-9420-c899c1b8b6ec-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.524160 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5cr4"]
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.535407 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r5cr4"]
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.549965 4923 scope.go:117] "RemoveContainer" containerID="bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.569402 4923 scope.go:117] "RemoveContainer" containerID="88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf"
Mar 21 04:49:17 crc kubenswrapper[4923]: E0321 04:49:17.570002 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf\": container with ID starting with 88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf not found: ID does not exist" containerID="88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.570042 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf"} err="failed to get container status \"88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf\": rpc error: code = NotFound desc = could not find container \"88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf\": container with ID starting with 88b5fd5e18a60bed86f51b401aa08b11e7b0439b03671f067bfbbfde214c76cf not found: ID does not exist"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.570071 4923 scope.go:117] "RemoveContainer" containerID="1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12"
Mar 21 04:49:17 crc kubenswrapper[4923]: E0321 04:49:17.570805 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12\": container with ID starting with 1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12 not found: ID does not exist" containerID="1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.570836 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12"} err="failed to get container status \"1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12\": rpc error: code = NotFound desc = could not find container \"1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12\": container with ID starting with 1c946a3fd5d83f73fa77dda294a1f9ff59a8d1c4c47fc9296bbb4440551eac12 not found: ID does not exist"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.570857 4923 scope.go:117] "RemoveContainer" containerID="bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9"
Mar 21 04:49:17 crc kubenswrapper[4923]: E0321 04:49:17.571281 4923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9\": container with ID starting with bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9 not found: ID does not exist" containerID="bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9"
Mar 21 04:49:17 crc kubenswrapper[4923]: I0321 04:49:17.571356 4923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9"} err="failed to get container status \"bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9\": rpc error: code = NotFound desc = could not find container \"bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9\": container with ID starting with bfc923e0952604250adc9a1e22be046021f1db520f0e3a769cf82459e7239fc9 not found: ID does not exist"
Mar 21 04:49:18 crc kubenswrapper[4923]: I0321 04:49:18.376000 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" path="/var/lib/kubelet/pods/3d030a26-8f40-4736-9420-c899c1b8b6ec/volumes"
Mar 21 04:49:18 crc kubenswrapper[4923]: I0321 04:49:18.377090 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" path="/var/lib/kubelet/pods/aef59204-b573-40c0-a01a-d29d24f11e2d/volumes"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.153947 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29567810-hs4gq"]
Mar 21 04:50:00 crc kubenswrapper[4923]: E0321 04:50:00.155046 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="extract-utilities"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155078 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="extract-utilities"
Mar 21 04:50:00 crc kubenswrapper[4923]: E0321 04:50:00.155107 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="registry-server"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155123 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="registry-server"
Mar 21 04:50:00 crc kubenswrapper[4923]: E0321 04:50:00.155381 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="extract-content"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155406 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="extract-content"
Mar 21 04:50:00 crc kubenswrapper[4923]: E0321 04:50:00.155444 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="extract-content"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155465 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="extract-content"
Mar 21 04:50:00 crc kubenswrapper[4923]: E0321 04:50:00.155497 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="extract-utilities"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155513 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="extract-utilities"
Mar 21 04:50:00 crc kubenswrapper[4923]: E0321 04:50:00.155533 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="registry-server"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155551 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="registry-server"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155817 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d030a26-8f40-4736-9420-c899c1b8b6ec" containerName="registry-server"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.155863 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef59204-b573-40c0-a01a-d29d24f11e2d" containerName="registry-server"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.156753 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567810-hs4gq"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.160448 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.160710 4923 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.161025 4923 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-8trfl"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.166364 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567810-hs4gq"]
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.237645 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966t8\" (UniqueName: \"kubernetes.io/projected/beb6bab5-42b9-417c-9417-156e78ea0f30-kube-api-access-966t8\") pod \"auto-csr-approver-29567810-hs4gq\" (UID: \"beb6bab5-42b9-417c-9417-156e78ea0f30\") " pod="openshift-infra/auto-csr-approver-29567810-hs4gq"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.339875 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-966t8\" (UniqueName: \"kubernetes.io/projected/beb6bab5-42b9-417c-9417-156e78ea0f30-kube-api-access-966t8\") pod \"auto-csr-approver-29567810-hs4gq\" (UID: \"beb6bab5-42b9-417c-9417-156e78ea0f30\") " pod="openshift-infra/auto-csr-approver-29567810-hs4gq"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.381771 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-966t8\" (UniqueName: \"kubernetes.io/projected/beb6bab5-42b9-417c-9417-156e78ea0f30-kube-api-access-966t8\") pod \"auto-csr-approver-29567810-hs4gq\" (UID: \"beb6bab5-42b9-417c-9417-156e78ea0f30\") " pod="openshift-infra/auto-csr-approver-29567810-hs4gq"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.483196 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567810-hs4gq"
Mar 21 04:50:00 crc kubenswrapper[4923]: I0321 04:50:00.923797 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29567810-hs4gq"]
Mar 21 04:50:01 crc kubenswrapper[4923]: I0321 04:50:01.801724 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567810-hs4gq" event={"ID":"beb6bab5-42b9-417c-9417-156e78ea0f30","Type":"ContainerStarted","Data":"42b675647e2ea783514950351c53518abc6ef4373046e0062a04310652e4e930"}
Mar 21 04:50:03 crc kubenswrapper[4923]: I0321 04:50:03.816803 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567810-hs4gq" event={"ID":"beb6bab5-42b9-417c-9417-156e78ea0f30","Type":"ContainerStarted","Data":"79d64568ba59b9cde2592e2125300e7989f725e10a2ab0e1b6448bfb0cc290c5"}
Mar 21 04:50:04 crc kubenswrapper[4923]: I0321 04:50:04.825700 4923 generic.go:334] "Generic (PLEG): container finished" podID="beb6bab5-42b9-417c-9417-156e78ea0f30" containerID="79d64568ba59b9cde2592e2125300e7989f725e10a2ab0e1b6448bfb0cc290c5" exitCode=0
Mar 21 04:50:04 crc kubenswrapper[4923]: I0321 04:50:04.825756 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567810-hs4gq" event={"ID":"beb6bab5-42b9-417c-9417-156e78ea0f30","Type":"ContainerDied","Data":"79d64568ba59b9cde2592e2125300e7989f725e10a2ab0e1b6448bfb0cc290c5"}
Mar 21 04:50:05 crc kubenswrapper[4923]: I0321 04:50:05.122592 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567810-hs4gq"
Mar 21 04:50:05 crc kubenswrapper[4923]: I0321 04:50:05.304922 4923 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-966t8\" (UniqueName: \"kubernetes.io/projected/beb6bab5-42b9-417c-9417-156e78ea0f30-kube-api-access-966t8\") pod \"beb6bab5-42b9-417c-9417-156e78ea0f30\" (UID: \"beb6bab5-42b9-417c-9417-156e78ea0f30\") "
Mar 21 04:50:05 crc kubenswrapper[4923]: I0321 04:50:05.311754 4923 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb6bab5-42b9-417c-9417-156e78ea0f30-kube-api-access-966t8" (OuterVolumeSpecName: "kube-api-access-966t8") pod "beb6bab5-42b9-417c-9417-156e78ea0f30" (UID: "beb6bab5-42b9-417c-9417-156e78ea0f30"). InnerVolumeSpecName "kube-api-access-966t8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 21 04:50:05 crc kubenswrapper[4923]: I0321 04:50:05.406972 4923 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-966t8\" (UniqueName: \"kubernetes.io/projected/beb6bab5-42b9-417c-9417-156e78ea0f30-kube-api-access-966t8\") on node \"crc\" DevicePath \"\""
Mar 21 04:50:05 crc kubenswrapper[4923]: I0321 04:50:05.836818 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29567810-hs4gq" event={"ID":"beb6bab5-42b9-417c-9417-156e78ea0f30","Type":"ContainerDied","Data":"42b675647e2ea783514950351c53518abc6ef4373046e0062a04310652e4e930"}
Mar 21 04:50:05 crc kubenswrapper[4923]: I0321 04:50:05.837203 4923 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b675647e2ea783514950351c53518abc6ef4373046e0062a04310652e4e930"
Mar 21 04:50:05 crc kubenswrapper[4923]: I0321 04:50:05.837149 4923 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29567810-hs4gq"
Mar 21 04:50:06 crc kubenswrapper[4923]: I0321 04:50:06.195387 4923 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29567804-9mhfl"]
Mar 21 04:50:06 crc kubenswrapper[4923]: I0321 04:50:06.210714 4923 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29567804-9mhfl"]
Mar 21 04:50:06 crc kubenswrapper[4923]: I0321 04:50:06.365568 4923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4324e451-0ba5-4627-b39e-598291c01252" path="/var/lib/kubelet/pods/4324e451-0ba5-4627-b39e-598291c01252/volumes"
Mar 21 04:50:19 crc kubenswrapper[4923]: I0321 04:50:19.256228 4923 scope.go:117] "RemoveContainer" containerID="147840b5a4978c8954bf46a9367f86305bd3a9746ee810fca755fe8ba6c89e4f"
Mar 21 04:50:33 crc kubenswrapper[4923]: I0321 04:50:33.235543 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:50:33 crc kubenswrapper[4923]: I0321 04:50:33.236168 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:51:03 crc kubenswrapper[4923]: I0321 04:51:03.235747 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:51:03 crc kubenswrapper[4923]: I0321 04:51:03.237938 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.235718 4923 patch_prober.go:28] interesting pod/machine-config-daemon-cv5gr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.236205 4923 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.236255 4923 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr"
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.236947 4923 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83cd8e32365a453dcfdb5a57011f0d61d2d4a1d8ad826dd8bdaa91c0f8253fda"} pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.237005 4923 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" podUID="34cdf206-b121-415c-ae40-21245192e724" containerName="machine-config-daemon" containerID="cri-o://83cd8e32365a453dcfdb5a57011f0d61d2d4a1d8ad826dd8bdaa91c0f8253fda" gracePeriod=600
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.374149 4923 generic.go:334] "Generic (PLEG): container finished" podID="34cdf206-b121-415c-ae40-21245192e724" containerID="83cd8e32365a453dcfdb5a57011f0d61d2d4a1d8ad826dd8bdaa91c0f8253fda" exitCode=0
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.374196 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerDied","Data":"83cd8e32365a453dcfdb5a57011f0d61d2d4a1d8ad826dd8bdaa91c0f8253fda"}
Mar 21 04:51:33 crc kubenswrapper[4923]: I0321 04:51:33.374229 4923 scope.go:117] "RemoveContainer" containerID="1906846df589a28da2c4bc8c6813ce1b92af42bb64d7a9bf02fe39056c6e9a04"
Mar 21 04:51:34 crc kubenswrapper[4923]: I0321 04:51:34.380863 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cv5gr" event={"ID":"34cdf206-b121-415c-ae40-21245192e724","Type":"ContainerStarted","Data":"b185a3e1ebf9e35b24c73beb202a7026a4d4c53c9496c75ad80654595678625a"}
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.516519 4923 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9vtjk"]
Mar 21 04:51:36 crc kubenswrapper[4923]: E0321 04:51:36.517045 4923 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beb6bab5-42b9-417c-9417-156e78ea0f30" containerName="oc"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.517059 4923 state_mem.go:107] "Deleted CPUSet assignment" podUID="beb6bab5-42b9-417c-9417-156e78ea0f30" containerName="oc"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.517182 4923 memory_manager.go:354] "RemoveStaleState removing state" podUID="beb6bab5-42b9-417c-9417-156e78ea0f30" containerName="oc"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.517978 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.565138 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vtjk"]
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.653896 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155a723-1faa-4024-a470-67c9b83e1f88-utilities\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.654305 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155a723-1faa-4024-a470-67c9b83e1f88-catalog-content\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.654374 4923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxvf\" (UniqueName: \"kubernetes.io/projected/5155a723-1faa-4024-a470-67c9b83e1f88-kube-api-access-gbxvf\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.755710 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155a723-1faa-4024-a470-67c9b83e1f88-utilities\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.755791 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155a723-1faa-4024-a470-67c9b83e1f88-catalog-content\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.755826 4923 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbxvf\" (UniqueName: \"kubernetes.io/projected/5155a723-1faa-4024-a470-67c9b83e1f88-kube-api-access-gbxvf\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.756444 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155a723-1faa-4024-a470-67c9b83e1f88-catalog-content\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.756699 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155a723-1faa-4024-a470-67c9b83e1f88-utilities\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.777164 4923 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbxvf\" (UniqueName: \"kubernetes.io/projected/5155a723-1faa-4024-a470-67c9b83e1f88-kube-api-access-gbxvf\") pod \"redhat-operators-9vtjk\" (UID: \"5155a723-1faa-4024-a470-67c9b83e1f88\") " pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:36 crc kubenswrapper[4923]: I0321 04:51:36.849180 4923 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vtjk"
Mar 21 04:51:37 crc kubenswrapper[4923]: I0321 04:51:37.042565 4923 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vtjk"]
Mar 21 04:51:37 crc kubenswrapper[4923]: I0321 04:51:37.396952 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vtjk" event={"ID":"5155a723-1faa-4024-a470-67c9b83e1f88","Type":"ContainerStarted","Data":"ff2c5a2731a4e770fdf95ee4c9544618fe097b5480718655e90769b1e966d5e6"}
Mar 21 04:51:37 crc kubenswrapper[4923]: I0321 04:51:37.397004 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vtjk" event={"ID":"5155a723-1faa-4024-a470-67c9b83e1f88","Type":"ContainerStarted","Data":"4e1c49dfa0b47da40dde2fe88bda64da87091693bb972dd7a5fe2a554fedf47b"}
Mar 21 04:51:38 crc kubenswrapper[4923]: I0321 04:51:38.403290 4923 generic.go:334] "Generic (PLEG): container finished" podID="5155a723-1faa-4024-a470-67c9b83e1f88" containerID="ff2c5a2731a4e770fdf95ee4c9544618fe097b5480718655e90769b1e966d5e6" exitCode=0
Mar 21 04:51:38 crc kubenswrapper[4923]: I0321 04:51:38.403350 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vtjk" event={"ID":"5155a723-1faa-4024-a470-67c9b83e1f88","Type":"ContainerDied","Data":"ff2c5a2731a4e770fdf95ee4c9544618fe097b5480718655e90769b1e966d5e6"}
Mar 21 04:51:40 crc kubenswrapper[4923]: I0321 04:51:40.415031 4923 generic.go:334] "Generic (PLEG): container finished" podID="5155a723-1faa-4024-a470-67c9b83e1f88" containerID="fd514263ca1e74af0a5da35a4aa133d6be6cb0b192857db50d8e6cec84c98fd0" exitCode=0
Mar 21 04:51:40 crc kubenswrapper[4923]: I0321 04:51:40.415219 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vtjk" event={"ID":"5155a723-1faa-4024-a470-67c9b83e1f88","Type":"ContainerDied","Data":"fd514263ca1e74af0a5da35a4aa133d6be6cb0b192857db50d8e6cec84c98fd0"}
Mar 21 04:51:41 crc kubenswrapper[4923]: I0321 04:51:41.432459 4923 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vtjk" event={"ID":"5155a723-1faa-4024-a470-67c9b83e1f88","Type":"ContainerStarted","Data":"5ecd11cfda5f7a9c96859ab043f9d3376096bc70600243f8b4c7b29c1c0ddf13"}
Mar 21 04:51:41 crc kubenswrapper[4923]: I0321 04:51:41.465605 4923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9vtjk" podStartSLOduration=3.048197098 podStartE2EDuration="5.465580359s" podCreationTimestamp="2026-03-21 04:51:36 +0000 UTC" firstStartedPulling="2026-03-21 04:51:38.405744329 +0000 UTC m=+2063.558755416" lastFinishedPulling="2026-03-21 04:51:40.82312755 +0000 UTC m=+2065.976138677" observedRunningTime="2026-03-21 04:51:41.453848387 +0000 UTC m=+2066.606859474" watchObservedRunningTime="2026-03-21 04:51:41.465580359 +0000 UTC m=+2066.618591456"