Feb 26 09:42:33 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 26 09:42:33 crc restorecon[4707]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 09:42:33 crc restorecon[4707]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 09:42:33 crc restorecon[4707]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc 
restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:33 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 
09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34
crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 
09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 09:42:34 crc restorecon[4707]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 
crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc 
restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 09:42:34 crc restorecon[4707]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 26 09:42:36 crc kubenswrapper[4760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 09:42:36 crc kubenswrapper[4760]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 26 09:42:36 crc kubenswrapper[4760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 09:42:36 crc kubenswrapper[4760]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 26 09:42:36 crc kubenswrapper[4760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 26 09:42:36 crc kubenswrapper[4760]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.066735 4760 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073405 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073441 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073451 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073463 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073475 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073484 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073493 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073501 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073509 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073517 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073524 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073532 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073539 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073550 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073559 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073568 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073601 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073609 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073618 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073628 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073636 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073644 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073651 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073659 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073667 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073675 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073683 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073691 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 
09:42:36.073700 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073710 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073720 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073730 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073739 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073749 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073758 4760 feature_gate.go:330] unrecognized feature gate: Example Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073766 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073773 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073781 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073788 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073796 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073805 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073812 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073822 4760 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073829 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073838 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073847 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073857 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073866 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073876 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073891 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073903 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073916 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073928 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073938 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073949 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073959 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073968 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073978 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.073990 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074000 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074010 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074019 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074029 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074038 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074047 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074056 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074066 4760 feature_gate.go:330] 
unrecognized feature gate: MachineConfigNodes Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074075 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074084 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074095 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.074105 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074375 4760 flags.go:64] FLAG: --address="0.0.0.0" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074401 4760 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074422 4760 flags.go:64] FLAG: --anonymous-auth="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074436 4760 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074451 4760 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074463 4760 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074479 4760 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074492 4760 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074503 4760 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074514 4760 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074527 4760 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 
26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074541 4760 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074552 4760 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074563 4760 flags.go:64] FLAG: --cgroup-root="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074624 4760 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074638 4760 flags.go:64] FLAG: --client-ca-file="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074650 4760 flags.go:64] FLAG: --cloud-config="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074661 4760 flags.go:64] FLAG: --cloud-provider="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074672 4760 flags.go:64] FLAG: --cluster-dns="[]" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074686 4760 flags.go:64] FLAG: --cluster-domain="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074698 4760 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074710 4760 flags.go:64] FLAG: --config-dir="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074720 4760 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074732 4760 flags.go:64] FLAG: --container-log-max-files="5" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074747 4760 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074759 4760 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074770 4760 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074782 4760 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 26 09:42:36 
crc kubenswrapper[4760]: I0226 09:42:36.074794 4760 flags.go:64] FLAG: --contention-profiling="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074806 4760 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074818 4760 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074830 4760 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074841 4760 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074855 4760 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074867 4760 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074878 4760 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074923 4760 flags.go:64] FLAG: --enable-load-reader="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074934 4760 flags.go:64] FLAG: --enable-server="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074945 4760 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074962 4760 flags.go:64] FLAG: --event-burst="100" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074974 4760 flags.go:64] FLAG: --event-qps="50" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074984 4760 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.074996 4760 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075007 4760 flags.go:64] FLAG: --eviction-hard="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075022 4760 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 26 09:42:36 crc 
kubenswrapper[4760]: I0226 09:42:36.075033 4760 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075044 4760 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075059 4760 flags.go:64] FLAG: --eviction-soft="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075070 4760 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075081 4760 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075093 4760 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075106 4760 flags.go:64] FLAG: --experimental-mounter-path="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075117 4760 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075128 4760 flags.go:64] FLAG: --fail-swap-on="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075140 4760 flags.go:64] FLAG: --feature-gates="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075154 4760 flags.go:64] FLAG: --file-check-frequency="20s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075166 4760 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075177 4760 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075189 4760 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075201 4760 flags.go:64] FLAG: --healthz-port="10248" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075213 4760 flags.go:64] FLAG: --help="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075225 4760 flags.go:64] FLAG: --hostname-override="" Feb 26 09:42:36 crc 
kubenswrapper[4760]: I0226 09:42:36.075236 4760 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075248 4760 flags.go:64] FLAG: --http-check-frequency="20s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075260 4760 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075270 4760 flags.go:64] FLAG: --image-credential-provider-config="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075281 4760 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075291 4760 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075303 4760 flags.go:64] FLAG: --image-service-endpoint="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075313 4760 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075323 4760 flags.go:64] FLAG: --kube-api-burst="100" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075335 4760 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075347 4760 flags.go:64] FLAG: --kube-api-qps="50" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075357 4760 flags.go:64] FLAG: --kube-reserved="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075369 4760 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075380 4760 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075391 4760 flags.go:64] FLAG: --kubelet-cgroups="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075401 4760 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075412 4760 flags.go:64] FLAG: --lock-file="" Feb 26 
09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075423 4760 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075435 4760 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075447 4760 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075464 4760 flags.go:64] FLAG: --log-json-split-stream="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075478 4760 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075488 4760 flags.go:64] FLAG: --log-text-split-stream="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075497 4760 flags.go:64] FLAG: --logging-format="text" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075505 4760 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075515 4760 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075524 4760 flags.go:64] FLAG: --manifest-url="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075533 4760 flags.go:64] FLAG: --manifest-url-header="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075544 4760 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075553 4760 flags.go:64] FLAG: --max-open-files="1000000" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075596 4760 flags.go:64] FLAG: --max-pods="110" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075610 4760 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075619 4760 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075628 4760 flags.go:64] FLAG: 
--memory-manager-policy="None" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075637 4760 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075646 4760 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075655 4760 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075664 4760 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075686 4760 flags.go:64] FLAG: --node-status-max-images="50" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075697 4760 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075709 4760 flags.go:64] FLAG: --oom-score-adj="-999" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075720 4760 flags.go:64] FLAG: --pod-cidr="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075732 4760 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075750 4760 flags.go:64] FLAG: --pod-manifest-path="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075759 4760 flags.go:64] FLAG: --pod-max-pids="-1" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075768 4760 flags.go:64] FLAG: --pods-per-core="0" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075777 4760 flags.go:64] FLAG: --port="10250" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075786 4760 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075795 4760 flags.go:64] FLAG: --provider-id="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 
09:42:36.075803 4760 flags.go:64] FLAG: --qos-reserved="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075812 4760 flags.go:64] FLAG: --read-only-port="10255" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075821 4760 flags.go:64] FLAG: --register-node="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075830 4760 flags.go:64] FLAG: --register-schedulable="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075839 4760 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075855 4760 flags.go:64] FLAG: --registry-burst="10" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075863 4760 flags.go:64] FLAG: --registry-qps="5" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075872 4760 flags.go:64] FLAG: --reserved-cpus="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075882 4760 flags.go:64] FLAG: --reserved-memory="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075893 4760 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075903 4760 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075912 4760 flags.go:64] FLAG: --rotate-certificates="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075921 4760 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075930 4760 flags.go:64] FLAG: --runonce="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075939 4760 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075948 4760 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075957 4760 flags.go:64] FLAG: --seccomp-default="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075966 4760 
flags.go:64] FLAG: --serialize-image-pulls="true" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075974 4760 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075984 4760 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.075993 4760 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076002 4760 flags.go:64] FLAG: --storage-driver-password="root" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076011 4760 flags.go:64] FLAG: --storage-driver-secure="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076020 4760 flags.go:64] FLAG: --storage-driver-table="stats" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076028 4760 flags.go:64] FLAG: --storage-driver-user="root" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076037 4760 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076046 4760 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076056 4760 flags.go:64] FLAG: --system-cgroups="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076064 4760 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076077 4760 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076087 4760 flags.go:64] FLAG: --tls-cert-file="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076095 4760 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076107 4760 flags.go:64] FLAG: --tls-min-version="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076116 4760 flags.go:64] FLAG: --tls-private-key-file="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 
09:42:36.076125 4760 flags.go:64] FLAG: --topology-manager-policy="none" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076134 4760 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076143 4760 flags.go:64] FLAG: --topology-manager-scope="container" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076153 4760 flags.go:64] FLAG: --v="2" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076164 4760 flags.go:64] FLAG: --version="false" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076175 4760 flags.go:64] FLAG: --vmodule="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076186 4760 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.076195 4760 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076451 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076472 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076486 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076496 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076507 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076516 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076526 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076535 4760 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076547 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076557 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076566 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076613 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076623 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076633 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076643 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076653 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076667 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076681 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076692 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076704 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076715 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076725 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076735 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076745 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076756 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076766 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076787 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076797 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076806 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076818 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076830 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076843 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076856 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076867 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076877 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076888 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076897 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076907 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076923 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076932 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076943 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076953 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076962 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076972 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 
09:42:36.076981 4760 feature_gate.go:330] unrecognized feature gate: Example Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.076991 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077002 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077011 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077019 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077028 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077038 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077047 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077057 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077066 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077075 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077087 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077099 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077109 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077121 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077129 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077137 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077145 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077152 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077160 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077168 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077175 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077183 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077201 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077210 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077217 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.077225 4760 feature_gate.go:330] 
unrecognized feature gate: NodeDisruptionPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.079051 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.103981 4760 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.104036 4760 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104156 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104168 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104179 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104187 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104195 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104204 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104212 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104220 4760 feature_gate.go:330] unrecognized feature gate: 
AWSClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104231 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104242 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104252 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104264 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104273 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104284 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104313 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104323 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104330 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104338 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104345 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104355 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104363 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104371 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104378 4760 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104385 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104393 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104401 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104409 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104420 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104432 4760 feature_gate.go:330] unrecognized feature gate: Example Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104440 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104449 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104457 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104465 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104473 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104480 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104488 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104496 4760 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104504 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104512 4760 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104519 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104527 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104535 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104544 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104552 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104560 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104567 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104610 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104621 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104631 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104641 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104648 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104657 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104665 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104673 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104681 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104691 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104698 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104706 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104714 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104722 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104729 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104737 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104745 4760 
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104753 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104760 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104768 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104777 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104786 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104795 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104802 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.104809 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.104822 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105038 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105050 4760 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105060 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105068 4760 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105076 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105085 4760 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105094 4760 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105103 4760 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105111 4760 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105118 4760 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105126 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105134 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105142 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105149 4760 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105157 4760 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105164 4760 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105175 4760 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105185 4760 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105193 4760 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105203 4760 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105211 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105220 4760 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105229 4760 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105237 4760 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105247 4760 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105257 4760 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105266 4760 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105274 4760 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105281 4760 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105289 4760 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105297 4760 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105304 4760 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105312 4760 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105319 4760 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105327 4760 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105334 4760 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105342 4760 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105350 4760 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105357 4760 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 
09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105364 4760 feature_gate.go:330] unrecognized feature gate: Example Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105372 4760 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105380 4760 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105387 4760 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105395 4760 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105402 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105410 4760 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105417 4760 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105426 4760 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105433 4760 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105441 4760 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105448 4760 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105457 4760 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105467 4760 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105477 4760 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105485 4760 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105494 4760 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105502 4760 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105511 4760 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105522 4760 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105531 4760 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105540 4760 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105548 4760 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105557 4760 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105566 4760 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105600 4760 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105608 4760 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105616 4760 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 
09:42:36.105624 4760 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105632 4760 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105639 4760 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.105648 4760 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.105660 4760 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.106911 4760 server.go:940] "Client rotation is on, will bootstrap in background" Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.118999 4760 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.135660 4760 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.135775 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.137481 4760 server.go:997] "Starting client certificate rotation" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.137510 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.137698 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.195024 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.199949 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.211695 4760 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.249415 4760 log.go:25] "Validated CRI v1 runtime API" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.335229 4760 log.go:25] "Validated CRI v1 image API" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.338263 4760 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.351870 4760 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-26-09-37-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.351932 4760 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.382036 4760 manager.go:217] Machine: {Timestamp:2026-02-26 09:42:36.379005985 +0000 UTC m=+1.512951558 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d0ce6fb9-1a58-4f12-a8d7-d211a8dd8bec BootID:033b4752-b4ba-4135-ad78-818bf8875f86 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:9e:bc:14 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:bc:14 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:71:4d:d7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:66:67:a3 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:75:2c:5f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:19:1c:3d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:41:09:df:a0:63 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:85:8e:df:d8:28 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.382461 4760 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.382703 4760 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.383143 4760 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.383476 4760 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.383538 4760 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.383905 4760 topology_manager.go:138] "Creating topology manager with none policy"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.383923 4760 container_manager_linux.go:303] "Creating device plugin manager"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.385779 4760 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.385841 4760 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.389496 4760 state_mem.go:36] "Initialized new in-memory state store"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.389699 4760 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.411117 4760 kubelet.go:418] "Attempting to sync node with API server"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.411167 4760 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.411197 4760 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.411218 4760 kubelet.go:324] "Adding apiserver pod source"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.411239 4760 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.415687 4760 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.416719 4760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.418375 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.418443 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.418540 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.418657 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.427355 4760 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431069 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431123 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431143 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431156 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431176 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431187 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431199 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431216 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431233 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431247 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431278 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431288 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431321 4760 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.431883 4760 server.go:1280] "Started kubelet"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.435253 4760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.436235 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:36 crc systemd[1]: Started Kubernetes Kubelet.
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.440940 4760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.442252 4760 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.444013 4760 server.go:460] "Adding debug handlers to kubelet server"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.444450 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.444536 4760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.452985 4760 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.453473 4760 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.453238 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.453109 4760 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.453647 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.453726 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.453863 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.457314 4760 factory.go:55] Registering systemd factory
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.457348 4760 factory.go:221] Registration of the systemd container factory successfully
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.458164 4760 factory.go:153] Registering CRI-O factory
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.458185 4760 factory.go:221] Registration of the crio container factory successfully
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.458259 4760 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.458297 4760 factory.go:103] Registering Raw factory
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.458317 4760 manager.go:1196] Started watching for new ooms in manager
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.460122 4760 manager.go:319] Starting recovery of all containers
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.458835 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897c29ca4f5b308 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,LastTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.479471 4760 manager.go:324] Recovery completed
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.483911 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484021 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484039 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484051 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484063 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484074 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484086 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484096 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484112 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484122 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484133 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484145 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484155 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484170 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484181 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484192 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484203 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484214 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484229 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484242 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484256 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484266 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484278 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484290 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484303 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484315 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484328 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484343 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484369 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484381 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484393 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484409 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484423 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484434 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484445 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484457 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484468 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484479 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484491 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484501 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484513 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484525 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484540 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484585 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484599 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484611 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484621 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484631 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484642 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484653 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484665 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484675 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484691 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484702 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484713 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484726 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484737 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484750 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484761 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484773 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484784 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484796 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484808 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484820 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484831 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484842 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484853 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484864 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484877 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484887 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484898 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484912 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484923 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484934 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484945 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484955 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484967 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484979 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.484990 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485001 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485019 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485033 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485045 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f"
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485056 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485067 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485079 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485090 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485101 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485113 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485123 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485136 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485146 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485158 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485169 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485183 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485193 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485207 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485241 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485256 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485270 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485285 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485298 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485310 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485336 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485356 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485369 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485381 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485392 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485405 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485418 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485431 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485443 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485454 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485466 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485476 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485489 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485501 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485514 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485526 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485537 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485547 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485559 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485582 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485593 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485608 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485620 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485630 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485641 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485652 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485664 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485675 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485688 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485698 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485709 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485721 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485733 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485744 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" 
seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485755 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485767 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485778 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485791 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485803 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485814 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 
09:42:36.485825 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485837 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485848 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485859 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485869 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485880 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485890 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485901 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485912 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485922 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485933 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485957 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485968 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485979 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.485989 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.491442 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.493036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.493089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.493102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.493875 4760 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.493898 4760 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.493923 4760 state_mem.go:36] "Initialized new in-memory state store" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504630 4760 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504707 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504728 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504749 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504763 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504783 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504798 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504815 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504832 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504847 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504862 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504883 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504902 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504921 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504942 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504962 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504981 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.504997 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505011 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505041 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505056 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505070 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505084 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505099 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505123 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505137 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505153 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505172 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505190 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505209 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505223 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505240 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505254 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505268 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505282 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505297 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505313 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505326 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505341 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505355 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505371 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505385 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505398 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505411 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505425 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505439 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505453 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505468 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505482 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505496 4760 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505508 4760 reconstruct.go:97] "Volume reconstruction finished"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.505518 4760 reconciler.go:26] "Reconciler: start to sync state"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.558887 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.572204 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.574979 4760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.575054 4760 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.575088 4760 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.575149 4760 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 26 09:42:36 crc kubenswrapper[4760]: W0226 09:42:36.577299 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.577353 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.582224 4760 policy_none.go:49] "None policy: Start"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.591432 4760 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.591522 4760 state_mem.go:35] "Initializing new in-memory state store"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.654808 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.658989 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.659819 4760 manager.go:334] "Starting Device Plugin manager"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.660084 4760 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.660113 4760 server.go:79] "Starting device plugin registration server"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.660503 4760 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.660517 4760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.660752 4760 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.660837 4760 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.660851 4760 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.668177 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.675901 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.676059 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.677656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.677688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.677698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.677846 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.678373 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.678427 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.678797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.678861 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.678886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.679118 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.679517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.679557 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.679647 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.679556 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.679684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680519 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680606 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680643 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.680929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681728 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681890 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.681943 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.682816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.682838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.682846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.683618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.683676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.683696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.684009 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.684075 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.686729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.686778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.686792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.761252 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.762972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.763008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.763018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.763044 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.763796 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.808861 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.808916 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.808937 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.808957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.808978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809071 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809549 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809648 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.809677 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.910710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.910782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.910820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.910854 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.910921 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.910950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.910978 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911129 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911769 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911867 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName:
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911945 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.912003 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911892 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.912030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.912043 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.912089 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.912032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.911988 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.912059 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.912070 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.963975 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.965206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.965284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.965298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:36 crc kubenswrapper[4760]: I0226 09:42:36.965377 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:42:36 crc kubenswrapper[4760]: E0226 09:42:36.966193 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.018089 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.036164 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.042434 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:42:37 crc kubenswrapper[4760]: E0226 09:42:37.055860 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.062177 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.067318 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.145608 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-01e102567deadc2e0014b1cf92b3728fa609f1e15c3240d8bc758b085f6ecd1d WatchSource:0}: Error finding container 01e102567deadc2e0014b1cf92b3728fa609f1e15c3240d8bc758b085f6ecd1d: Status 404 returned error can't find the container with id 01e102567deadc2e0014b1cf92b3728fa609f1e15c3240d8bc758b085f6ecd1d Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.146954 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c114c4112fe0c642975d96fbbd071361423afbde2a5010b3d0ce5b47c15b6adc WatchSource:0}: Error finding container c114c4112fe0c642975d96fbbd071361423afbde2a5010b3d0ce5b47c15b6adc: Status 404 returned error can't find the container with id 
c114c4112fe0c642975d96fbbd071361423afbde2a5010b3d0ce5b47c15b6adc Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.148323 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e9635250aa49aff05796787979d6c380267fdfc1cf689d466aea653fe6baa544 WatchSource:0}: Error finding container e9635250aa49aff05796787979d6c380267fdfc1cf689d466aea653fe6baa544: Status 404 returned error can't find the container with id e9635250aa49aff05796787979d6c380267fdfc1cf689d466aea653fe6baa544 Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.151331 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-108d302acbf05f2a666a872db12a3aba6f13eca8f13764bb0419d9528529e7dd WatchSource:0}: Error finding container 108d302acbf05f2a666a872db12a3aba6f13eca8f13764bb0419d9528529e7dd: Status 404 returned error can't find the container with id 108d302acbf05f2a666a872db12a3aba6f13eca8f13764bb0419d9528529e7dd Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.154772 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-79de5e3a4b7cebbaaaa169decace8aaab3af34de481ddec93f3e28afaa35b0ae WatchSource:0}: Error finding container 79de5e3a4b7cebbaaaa169decace8aaab3af34de481ddec93f3e28afaa35b0ae: Status 404 returned error can't find the container with id 79de5e3a4b7cebbaaaa169decace8aaab3af34de481ddec93f3e28afaa35b0ae Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.309601 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection 
refused Feb 26 09:42:37 crc kubenswrapper[4760]: E0226 09:42:37.309960 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.366675 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.367708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.367736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.367744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.367767 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:42:37 crc kubenswrapper[4760]: E0226 09:42:37.368186 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.413940 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Feb 26 09:42:37 crc kubenswrapper[4760]: E0226 09:42:37.414016 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.437173 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.580896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e9635250aa49aff05796787979d6c380267fdfc1cf689d466aea653fe6baa544"} Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.581992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79de5e3a4b7cebbaaaa169decace8aaab3af34de481ddec93f3e28afaa35b0ae"} Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.582742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c114c4112fe0c642975d96fbbd071361423afbde2a5010b3d0ce5b47c15b6adc"} Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.583543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"108d302acbf05f2a666a872db12a3aba6f13eca8f13764bb0419d9528529e7dd"} Feb 26 09:42:37 crc kubenswrapper[4760]: I0226 09:42:37.584255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"01e102567deadc2e0014b1cf92b3728fa609f1e15c3240d8bc758b085f6ecd1d"} Feb 26 09:42:37 crc kubenswrapper[4760]: W0226 09:42:37.844025 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Feb 26 09:42:37 crc kubenswrapper[4760]: E0226 09:42:37.844143 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Feb 26 09:42:37 crc kubenswrapper[4760]: E0226 09:42:37.857296 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Feb 26 09:42:38 crc kubenswrapper[4760]: W0226 09:42:38.041963 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Feb 26 09:42:38 crc kubenswrapper[4760]: E0226 09:42:38.042067 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Feb 26 09:42:38 crc kubenswrapper[4760]: I0226 09:42:38.168958 4760 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:38 crc kubenswrapper[4760]: I0226 09:42:38.170698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:38 crc kubenswrapper[4760]: I0226 09:42:38.170761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:38 crc kubenswrapper[4760]: I0226 09:42:38.170780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:38 crc kubenswrapper[4760]: I0226 09:42:38.170822 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:42:38 crc kubenswrapper[4760]: E0226 09:42:38.171485 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Feb 26 09:42:38 crc kubenswrapper[4760]: I0226 09:42:38.307935 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 09:42:38 crc kubenswrapper[4760]: E0226 09:42:38.309246 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Feb 26 09:42:38 crc kubenswrapper[4760]: I0226 09:42:38.437702 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 
09:42:39.437431 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Feb 26 09:42:39 crc kubenswrapper[4760]: E0226 09:42:39.458150 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Feb 26 09:42:39 crc kubenswrapper[4760]: W0226 09:42:39.578314 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Feb 26 09:42:39 crc kubenswrapper[4760]: E0226 09:42:39.578448 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.592467 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54fded501ee4a42db6029006dead3d4edaf44ba6c748b8ca880efd3b039cd24f"} Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.594635 4760 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1a8b7ccf1c5da8ff9606c3c7c4651bc145f0830ab14e4c53866eba60433562de" exitCode=0 Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.594738 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1a8b7ccf1c5da8ff9606c3c7c4651bc145f0830ab14e4c53866eba60433562de"} Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.594816 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.595906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.595945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.595957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.597110 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a5e887362d4731b06c7ca639e3c1a69ae25e933cfc6bef5534cfa022ab97b09c" exitCode=0 Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.597189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a5e887362d4731b06c7ca639e3c1a69ae25e933cfc6bef5534cfa022ab97b09c"} Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.597252 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.598299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.598373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:39 
crc kubenswrapper[4760]: I0226 09:42:39.598402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.599755 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dc3eb599a1f735acada2bd5ef1d9e0020dcecbb4070dc5769af0873dad812da0" exitCode=0 Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.599866 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.599807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dc3eb599a1f735acada2bd5ef1d9e0020dcecbb4070dc5769af0873dad812da0"} Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.600642 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.601444 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.601480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.601498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.601512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.601534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.601544 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.602465 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="df85e93987f9791f43b10720a51dcf4d4c24234ce049aff03801cf4dc368ba01" exitCode=0 Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.602495 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"df85e93987f9791f43b10720a51dcf4d4c24234ce049aff03801cf4dc368ba01"} Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.602616 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.603417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.603450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.603463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.772484 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.774136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.774186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.774198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID"
Feb 26 09:42:39 crc kubenswrapper[4760]: I0226 09:42:39.774232 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 09:42:39 crc kubenswrapper[4760]: E0226 09:42:39.774709 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc"
Feb 26 09:42:40 crc kubenswrapper[4760]: W0226 09:42:40.125470 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:40 crc kubenswrapper[4760]: E0226 09:42:40.125631 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.437006 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:40 crc kubenswrapper[4760]: W0226 09:42:40.588860 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:40 crc kubenswrapper[4760]: E0226 09:42:40.589005 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.608289 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7d87433f48722454e8354663a5231ad9de10b77a7b31294bdd1d334fdcc80cf2"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.608346 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"58af4554c86f6bc298dd9470d0af823cd912b5226823622c48026b6fe510b965"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.610319 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d4ac25c0e6c7fa54988b35e7f1345fa88424e07fce3f970b5f2ee0413f370183"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.610344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1e4c896ff7ab6dc79e3d9f7e0c2d62ce56adb7d5233f4e26cd2afda12a1dff50"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.613058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f873bfd1bde256f3ba8b460ae2aeab0e0ec82743932e5905a251070d7b77954"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.613099 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ab931c6ee89813eba42021c556459016bac7810a93a167b53e69c7b6705fc5c5"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.615086 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="16809a97b1d79c7fab33d2001a12e24942b1db8e5b93b6f43b755b4e0cb5f7b3" exitCode=0
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.615193 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"16809a97b1d79c7fab33d2001a12e24942b1db8e5b93b6f43b755b4e0cb5f7b3"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.615244 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.616075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.616107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.616119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.617277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"286909951e5f2e527c26f11d9327df6eb2080ca644c90e5bfb8716b7d7951c39"}
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.617341 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.618150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.618193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:40 crc kubenswrapper[4760]: I0226 09:42:40.618205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:40 crc kubenswrapper[4760]: W0226 09:42:40.920556 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:40 crc kubenswrapper[4760]: E0226 09:42:40.920651 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.437442 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.622347 4760 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9734f1ad2234647cfd28482b8bfcefe206932ac4de357c3ee53462f31f16784f" exitCode=0
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.622427 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9734f1ad2234647cfd28482b8bfcefe206932ac4de357c3ee53462f31f16784f"}
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.622468 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.623548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.623598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.623612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.626218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26ccacf58a409397f6308561b33501ec1440bcbde84421b4d501a799667bd015"}
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.626232 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.627260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.627311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.627327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.632473 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f4ea56b0946f8685a82d272815e0154713d6e2f16cd816a8f1ac0fbe24db6bc9"}
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.632747 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.634131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.634183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.634197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.639695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba46b7fbf5f6a13690bd5d758c97eadad3a99153e69f8539065ae32d667b3b15"}
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.639785 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3abb0dfbcfc7e859ea45ba5daf96d064ba260017ed48b5ba126c462e023fcf92"}
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.639804 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41869c6aa4019c7a99928daadcc42b5e73f395a1e723ef5bb95cab3b460feaca"}
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.639731 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.639854 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.641807 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.641853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.641867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.641969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.642031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:41 crc kubenswrapper[4760]: I0226 09:42:41.642046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.314252 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 09:42:42 crc kubenswrapper[4760]: E0226 09:42:42.315254 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.437819 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2cee3dccc5d71be4030651ddc217158609aa7fdbd103bbc1ad5c8fc785956d05"}
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645696 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645721 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d3c20a4c9a7aa01e05a49d9c8c24519b15450fe21cd6dda6ce812afdad1bfa5"}
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"83aef37e525d52e763f3de1eec30fdf923be77878ca1eba272d8ac8b3416529c"}
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645747 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645789 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645681 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.645732 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.646847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.646870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.646877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.647466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.647483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.647491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.648073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.648094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.648102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:42 crc kubenswrapper[4760]: E0226 09:42:42.659733 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="6.4s"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.975615 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.977358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.977387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.977396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:42 crc kubenswrapper[4760]: I0226 09:42:42.977417 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 09:42:42 crc kubenswrapper[4760]: E0226 09:42:42.977741 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.437501 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:43 crc kubenswrapper[4760]: E0226 09:42:43.500419 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1897c29ca4f5b308 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,LastTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.651779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fb5ea7c4e0bce43f94e3515a08b9048a9be1dba04a6089c5a0818efc643d254a"}
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.651826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ea77f4029ceb4dc9623ba4fe2da739b19cb846609362f6a6f40784d5b7b8767f"}
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.651788 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.653077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.653131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.653143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.655409 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.656711 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba46b7fbf5f6a13690bd5d758c97eadad3a99153e69f8539065ae32d667b3b15" exitCode=255
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.656917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ba46b7fbf5f6a13690bd5d758c97eadad3a99153e69f8539065ae32d667b3b15"}
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.657088 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.657434 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.658178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.658264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.658327 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.658903 4760 scope.go:117] "RemoveContainer" containerID="ba46b7fbf5f6a13690bd5d758c97eadad3a99153e69f8539065ae32d667b3b15"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.659326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.659401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:43 crc kubenswrapper[4760]: I0226 09:42:43.659466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:44 crc kubenswrapper[4760]: W0226 09:42:44.084913 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:44 crc kubenswrapper[4760]: E0226 09:42:44.085016 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.437718 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.660986 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.663162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f"}
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.663277 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.663315 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.663355 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.664649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.664694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.664712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.664717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.664752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:44 crc kubenswrapper[4760]: I0226 09:42:44.664763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.089896 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.090099 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.091686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.091737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.091750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:45 crc kubenswrapper[4760]: W0226 09:42:45.200000 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Feb 26 09:42:45 crc kubenswrapper[4760]: E0226 09:42:45.200142 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.237266 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.666028 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.666085 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.667128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.667165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.667177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:45 crc kubenswrapper[4760]: I0226 09:42:45.696960 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:46 crc kubenswrapper[4760]: I0226 09:42:46.202789 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:42:46 crc kubenswrapper[4760]: I0226 09:42:46.668061 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:46 crc kubenswrapper[4760]: E0226 09:42:46.668327 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 09:42:46 crc kubenswrapper[4760]: I0226 09:42:46.669049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:46 crc kubenswrapper[4760]: I0226 09:42:46.669072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:46 crc kubenswrapper[4760]: I0226 09:42:46.669083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.330546 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.330813 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.333662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.333754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.333767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.670754 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.671651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.671705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.671721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.895158 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.895434 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.896858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.896903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.896921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.930940 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.931148 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.932305 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.932351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:47 crc kubenswrapper[4760]: I0226 09:42:47.932361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:48 crc kubenswrapper[4760]: I0226 09:42:48.090673 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 09:42:48 crc kubenswrapper[4760]: I0226 09:42:48.090758 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 09:42:49 crc kubenswrapper[4760]: I0226 09:42:49.377858 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:49 crc kubenswrapper[4760]: I0226 09:42:49.379095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:49 crc kubenswrapper[4760]: I0226 09:42:49.379126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:49 crc kubenswrapper[4760]: I0226 09:42:49.379136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:49 crc kubenswrapper[4760]: I0226 09:42:49.379161 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.056089 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.370151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.370348 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.371453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.371495 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.371506 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.376884 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.377065 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.377980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.378017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.378030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.382605 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.678560 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.679522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.679639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.679664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:51 crc kubenswrapper[4760]: I0226 09:42:51.682095 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 09:42:52 crc kubenswrapper[4760]: I0226 09:42:52.680859 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 09:42:52 crc kubenswrapper[4760]: I0226 09:42:52.681964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:42:52 crc kubenswrapper[4760]: I0226 09:42:52.681997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:42:52 crc kubenswrapper[4760]: I0226 09:42:52.682010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.343892 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z
Feb 26 09:42:53 crc kubenswrapper[4760]: W0226 09:42:53.346458 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z
Feb 26 09:42:53 crc kubenswrapper[4760]: E0226 09:42:53.346529 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 09:42:53 crc kubenswrapper[4760]: W0226 09:42:53.346996 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z
Feb 26 09:42:53 crc kubenswrapper[4760]: E0226 09:42:53.347064 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.352762 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403"
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.352815 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 26 09:42:53 crc kubenswrapper[4760]: W0226 09:42:53.352899 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z Feb 26 09:42:53 crc kubenswrapper[4760]: E0226 09:42:53.352968 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:42:53 crc kubenswrapper[4760]: E0226 09:42:53.353340 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z" 
logger="UnhandledError" Feb 26 09:42:53 crc kubenswrapper[4760]: E0226 09:42:53.353667 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 09:42:53 crc kubenswrapper[4760]: E0226 09:42:53.355626 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.358196 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.358244 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.442176 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z Feb 26 09:42:53 crc 
kubenswrapper[4760]: E0226 09:42:53.504009 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:53Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c29ca4f5b308 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,LastTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.683767 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.684176 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.685693 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f" exitCode=255 Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.685736 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f"} Feb 26 
09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.685800 4760 scope.go:117] "RemoveContainer" containerID="ba46b7fbf5f6a13690bd5d758c97eadad3a99153e69f8539065ae32d667b3b15" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.685891 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.686797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.686828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.686840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:53 crc kubenswrapper[4760]: I0226 09:42:53.687335 4760 scope.go:117] "RemoveContainer" containerID="c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f" Feb 26 09:42:53 crc kubenswrapper[4760]: E0226 09:42:53.687521 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:42:54 crc kubenswrapper[4760]: I0226 09:42:54.439399 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:54Z is after 2026-02-23T05:33:13Z Feb 26 09:42:54 crc kubenswrapper[4760]: I0226 09:42:54.691053 4760 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 09:42:55 crc kubenswrapper[4760]: W0226 09:42:55.324657 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:55Z is after 2026-02-23T05:33:13Z Feb 26 09:42:55 crc kubenswrapper[4760]: E0226 09:42:55.324750 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:55Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:42:55 crc kubenswrapper[4760]: I0226 09:42:55.443961 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:55Z is after 2026-02-23T05:33:13Z Feb 26 09:42:55 crc kubenswrapper[4760]: I0226 09:42:55.704481 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:42:55 crc kubenswrapper[4760]: I0226 09:42:55.704699 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:55 crc kubenswrapper[4760]: I0226 09:42:55.705977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:55 crc 
kubenswrapper[4760]: I0226 09:42:55.706043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:55 crc kubenswrapper[4760]: I0226 09:42:55.706063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:55 crc kubenswrapper[4760]: I0226 09:42:55.707060 4760 scope.go:117] "RemoveContainer" containerID="c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f" Feb 26 09:42:55 crc kubenswrapper[4760]: E0226 09:42:55.707404 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:42:55 crc kubenswrapper[4760]: I0226 09:42:55.708720 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:42:56 crc kubenswrapper[4760]: I0226 09:42:56.443120 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:56Z is after 2026-02-23T05:33:13Z Feb 26 09:42:56 crc kubenswrapper[4760]: E0226 09:42:56.668930 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 09:42:56 crc kubenswrapper[4760]: I0226 09:42:56.697473 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:42:56 crc kubenswrapper[4760]: I0226 09:42:56.698618 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:42:56 crc kubenswrapper[4760]: I0226 09:42:56.698709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:42:56 crc kubenswrapper[4760]: I0226 09:42:56.698735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:42:56 crc kubenswrapper[4760]: I0226 09:42:56.699763 4760 scope.go:117] "RemoveContainer" containerID="c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f" Feb 26 09:42:56 crc kubenswrapper[4760]: E0226 09:42:56.700107 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:42:57 crc kubenswrapper[4760]: I0226 09:42:57.441673 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:57Z is after 2026-02-23T05:33:13Z Feb 26 09:42:58 crc kubenswrapper[4760]: I0226 09:42:58.091013 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:42:58 crc kubenswrapper[4760]: I0226 09:42:58.091119 4760 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:42:58 crc kubenswrapper[4760]: I0226 09:42:58.442492 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:58Z is after 2026-02-23T05:33:13Z Feb 26 09:42:59 crc kubenswrapper[4760]: I0226 09:42:59.442853 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:42:59Z is after 2026-02-23T05:33:13Z Feb 26 09:43:00 crc kubenswrapper[4760]: E0226 09:43:00.356631 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:00Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 09:43:00 crc kubenswrapper[4760]: I0226 09:43:00.356733 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:00 crc kubenswrapper[4760]: I0226 09:43:00.358344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:00 crc kubenswrapper[4760]: I0226 09:43:00.358388 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:00 crc kubenswrapper[4760]: I0226 09:43:00.358405 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:00 crc kubenswrapper[4760]: I0226 09:43:00.358438 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:00 crc kubenswrapper[4760]: E0226 09:43:00.361830 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:00Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 09:43:00 crc kubenswrapper[4760]: I0226 09:43:00.439614 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:00Z is after 2026-02-23T05:33:13Z Feb 26 09:43:00 crc kubenswrapper[4760]: W0226 09:43:00.921889 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:00Z is after 2026-02-23T05:33:13Z Feb 26 09:43:00 crc kubenswrapper[4760]: E0226 09:43:00.922004 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T09:43:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.421276 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.421522 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.423181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.423244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.423267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.436777 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.440696 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:01Z is after 2026-02-23T05:33:13Z Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.710974 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.712145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.712187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 26 09:43:01 crc kubenswrapper[4760]: I0226 09:43:01.712202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:02 crc kubenswrapper[4760]: W0226 09:43:02.131528 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:02Z is after 2026-02-23T05:33:13Z Feb 26 09:43:02 crc kubenswrapper[4760]: E0226 09:43:02.131629 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:43:02 crc kubenswrapper[4760]: I0226 09:43:02.441758 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:02Z is after 2026-02-23T05:33:13Z Feb 26 09:43:02 crc kubenswrapper[4760]: I0226 09:43:02.597226 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:43:02 crc kubenswrapper[4760]: I0226 09:43:02.597514 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:02 crc kubenswrapper[4760]: I0226 09:43:02.599267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 09:43:02 crc kubenswrapper[4760]: I0226 09:43:02.599318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:02 crc kubenswrapper[4760]: I0226 09:43:02.599331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:02 crc kubenswrapper[4760]: I0226 09:43:02.600230 4760 scope.go:117] "RemoveContainer" containerID="c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f" Feb 26 09:43:02 crc kubenswrapper[4760]: E0226 09:43:02.600442 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:43:03 crc kubenswrapper[4760]: I0226 09:43:03.442738 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:03Z is after 2026-02-23T05:33:13Z Feb 26 09:43:03 crc kubenswrapper[4760]: E0226 09:43:03.507895 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:03Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c29ca4f5b308 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,LastTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:04 crc kubenswrapper[4760]: I0226 09:43:04.441024 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:04Z is after 2026-02-23T05:33:13Z Feb 26 09:43:05 crc kubenswrapper[4760]: I0226 09:43:05.441805 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:05Z is after 2026-02-23T05:33:13Z Feb 26 09:43:06 crc kubenswrapper[4760]: I0226 09:43:06.440397 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:06Z is after 2026-02-23T05:33:13Z Feb 26 09:43:06 crc kubenswrapper[4760]: E0226 09:43:06.669199 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 09:43:07 crc kubenswrapper[4760]: E0226 09:43:07.361353 4760 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:07Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 09:43:07 crc kubenswrapper[4760]: I0226 09:43:07.362598 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:07 crc kubenswrapper[4760]: I0226 09:43:07.364026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:07 crc kubenswrapper[4760]: I0226 09:43:07.364093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:07 crc kubenswrapper[4760]: I0226 09:43:07.364105 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:07 crc kubenswrapper[4760]: I0226 09:43:07.364131 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:07 crc kubenswrapper[4760]: E0226 09:43:07.367231 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:07Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 09:43:07 crc kubenswrapper[4760]: I0226 09:43:07.440464 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:07Z is after 2026-02-23T05:33:13Z Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.090741 4760 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.090857 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.090976 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.091163 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.092636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.092689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.092701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.093412 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"58af4554c86f6bc298dd9470d0af823cd912b5226823622c48026b6fe510b965"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller 
failed startup probe, will be restarted" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.093626 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://58af4554c86f6bc298dd9470d0af823cd912b5226823622c48026b6fe510b965" gracePeriod=30 Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.440035 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:08Z is after 2026-02-23T05:33:13Z Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.735180 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.735602 4760 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="58af4554c86f6bc298dd9470d0af823cd912b5226823622c48026b6fe510b965" exitCode=255 Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.735652 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"58af4554c86f6bc298dd9470d0af823cd912b5226823622c48026b6fe510b965"} Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.735684 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b9805e2c9966c0f94ed7b9e78915081c4d02c30ffaebe251b4152a00bd1ccc96"} Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.735776 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.736530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.736562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:08 crc kubenswrapper[4760]: I0226 09:43:08.736576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:09 crc kubenswrapper[4760]: I0226 09:43:09.441041 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:09Z is after 2026-02-23T05:33:13Z Feb 26 09:43:10 crc kubenswrapper[4760]: I0226 09:43:10.443054 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:10Z is after 2026-02-23T05:33:13Z Feb 26 09:43:10 crc kubenswrapper[4760]: W0226 09:43:10.615230 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T09:43:10Z is after 2026-02-23T05:33:13Z Feb 26 09:43:10 crc kubenswrapper[4760]: E0226 09:43:10.615338 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:43:10 crc kubenswrapper[4760]: I0226 09:43:10.717558 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 09:43:10 crc kubenswrapper[4760]: E0226 09:43:10.721573 4760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:43:10 crc kubenswrapper[4760]: E0226 09:43:10.722816 4760 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 26 09:43:11 crc kubenswrapper[4760]: I0226 09:43:11.439936 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:11Z is after 2026-02-23T05:33:13Z Feb 26 09:43:12 crc kubenswrapper[4760]: I0226 09:43:12.439945 
4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:12Z is after 2026-02-23T05:33:13Z Feb 26 09:43:13 crc kubenswrapper[4760]: I0226 09:43:13.442442 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:13Z is after 2026-02-23T05:33:13Z Feb 26 09:43:13 crc kubenswrapper[4760]: E0226 09:43:13.511753 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:13Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897c29ca4f5b308 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,LastTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:14 crc kubenswrapper[4760]: E0226 09:43:14.365521 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-26T09:43:14Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 09:43:14 crc kubenswrapper[4760]: I0226 09:43:14.367644 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:14 crc kubenswrapper[4760]: I0226 09:43:14.368804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:14 crc kubenswrapper[4760]: I0226 09:43:14.368837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:14 crc kubenswrapper[4760]: I0226 09:43:14.368846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:14 crc kubenswrapper[4760]: I0226 09:43:14.368868 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:14 crc kubenswrapper[4760]: E0226 09:43:14.371815 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:14Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 09:43:14 crc kubenswrapper[4760]: I0226 09:43:14.439041 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:14Z is after 2026-02-23T05:33:13Z Feb 26 09:43:14 crc kubenswrapper[4760]: W0226 09:43:14.906834 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-26T09:43:14Z is after 2026-02-23T05:33:13Z Feb 26 09:43:14 crc kubenswrapper[4760]: E0226 09:43:14.906953 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:43:15 crc kubenswrapper[4760]: I0226 09:43:15.090671 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:43:15 crc kubenswrapper[4760]: I0226 09:43:15.090961 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:15 crc kubenswrapper[4760]: I0226 09:43:15.092917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:15 crc kubenswrapper[4760]: I0226 09:43:15.092989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:15 crc kubenswrapper[4760]: I0226 09:43:15.093001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:15 crc kubenswrapper[4760]: I0226 09:43:15.441760 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:15Z is after 2026-02-23T05:33:13Z Feb 26 09:43:16 crc kubenswrapper[4760]: I0226 09:43:16.440125 4760 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:16Z is after 2026-02-23T05:33:13Z Feb 26 09:43:16 crc kubenswrapper[4760]: I0226 09:43:16.576084 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:16 crc kubenswrapper[4760]: I0226 09:43:16.577751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:16 crc kubenswrapper[4760]: I0226 09:43:16.577789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:16 crc kubenswrapper[4760]: I0226 09:43:16.577800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:16 crc kubenswrapper[4760]: I0226 09:43:16.578442 4760 scope.go:117] "RemoveContainer" containerID="c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f" Feb 26 09:43:16 crc kubenswrapper[4760]: E0226 09:43:16.669360 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.441068 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:17Z is after 2026-02-23T05:33:13Z Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.763004 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 09:43:17 crc 
kubenswrapper[4760]: I0226 09:43:17.763880 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.765735 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="239c06375c0fd99aebf7862818eecfb19076b926203ac1dcd7849726fb12f82f" exitCode=255 Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.765806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"239c06375c0fd99aebf7862818eecfb19076b926203ac1dcd7849726fb12f82f"} Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.765883 4760 scope.go:117] "RemoveContainer" containerID="c215c51494c2f2b9309889a88043a2fd2e27dc58f7617e9e44199264e51ff06f" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.766037 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.767146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.767177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.767186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.767777 4760 scope.go:117] "RemoveContainer" containerID="239c06375c0fd99aebf7862818eecfb19076b926203ac1dcd7849726fb12f82f" Feb 26 09:43:17 crc kubenswrapper[4760]: E0226 09:43:17.768015 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.931379 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.931593 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.932732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.932766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:17 crc kubenswrapper[4760]: I0226 09:43:17.932780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:18 crc kubenswrapper[4760]: I0226 09:43:18.090823 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:43:18 crc kubenswrapper[4760]: I0226 09:43:18.090937 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:43:18 crc kubenswrapper[4760]: I0226 09:43:18.440187 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:18Z is after 2026-02-23T05:33:13Z Feb 26 09:43:18 crc kubenswrapper[4760]: W0226 09:43:18.534658 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:18Z is after 2026-02-23T05:33:13Z Feb 26 09:43:18 crc kubenswrapper[4760]: E0226 09:43:18.534734 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:43:18 crc kubenswrapper[4760]: I0226 09:43:18.769352 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 09:43:18 crc kubenswrapper[4760]: W0226 09:43:18.853557 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-26T09:43:18Z is after 2026-02-23T05:33:13Z Feb 26 09:43:18 crc kubenswrapper[4760]: E0226 09:43:18.853660 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 09:43:19 crc kubenswrapper[4760]: I0226 09:43:19.441046 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:19Z is after 2026-02-23T05:33:13Z Feb 26 09:43:20 crc kubenswrapper[4760]: I0226 09:43:20.440633 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:20Z is after 2026-02-23T05:33:13Z Feb 26 09:43:21 crc kubenswrapper[4760]: E0226 09:43:21.370066 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:21Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 09:43:21 crc kubenswrapper[4760]: I0226 09:43:21.372344 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:21 crc kubenswrapper[4760]: I0226 
09:43:21.375981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:21 crc kubenswrapper[4760]: I0226 09:43:21.376021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:21 crc kubenswrapper[4760]: I0226 09:43:21.376030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:21 crc kubenswrapper[4760]: I0226 09:43:21.376054 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:21 crc kubenswrapper[4760]: E0226 09:43:21.378467 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:21Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 09:43:21 crc kubenswrapper[4760]: I0226 09:43:21.441121 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:21Z is after 2026-02-23T05:33:13Z Feb 26 09:43:22 crc kubenswrapper[4760]: I0226 09:43:22.440276 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:43:22Z is after 2026-02-23T05:33:13Z Feb 26 09:43:22 crc kubenswrapper[4760]: I0226 09:43:22.596732 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:43:22 crc kubenswrapper[4760]: I0226 
09:43:22.596857 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:22 crc kubenswrapper[4760]: I0226 09:43:22.597759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:22 crc kubenswrapper[4760]: I0226 09:43:22.597791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:22 crc kubenswrapper[4760]: I0226 09:43:22.597801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:22 crc kubenswrapper[4760]: I0226 09:43:22.598243 4760 scope.go:117] "RemoveContainer" containerID="239c06375c0fd99aebf7862818eecfb19076b926203ac1dcd7849726fb12f82f" Feb 26 09:43:22 crc kubenswrapper[4760]: E0226 09:43:22.598396 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:43:23 crc kubenswrapper[4760]: I0226 09:43:23.443218 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.517676 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca4f5b308 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,LastTimestamp:2026-02-26 09:42:36.431831816 +0000 UTC m=+1.565777319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.521767 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.527177 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.532038 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89cbb35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,LastTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.536985 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29cb3641d3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.673948988 +0000 UTC m=+1.807894521,LastTimestamp:2026-02-26 09:42:36.673948988 +0000 UTC 
m=+1.807894521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.543265 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c1f80\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.677682546 +0000 UTC m=+1.811628039,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.547304 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c8971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.677695116 +0000 UTC m=+1.811640609,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 
26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.552116 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89cbb35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89cbb35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,LastTimestamp:2026-02-26 09:42:36.677703296 +0000 UTC m=+1.811648789,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.557301 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c1f80\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.678833889 +0000 UTC m=+1.812779422,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.561916 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c8971\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.67887629 +0000 UTC m=+1.812821813,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.566691 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89cbb35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89cbb35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,LastTimestamp:2026-02-26 09:42:36.678899241 +0000 UTC m=+1.812844774,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.571523 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c1f80\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.67953958 +0000 UTC m=+1.813485073,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.576552 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c8971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.679678654 +0000 UTC m=+1.813624147,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.581396 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89cbb35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89cbb35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,LastTimestamp:2026-02-26 09:42:36.679689674 +0000 UTC m=+1.813635157,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.585871 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c1f80\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.680407605 +0000 UTC m=+1.814353098,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.590053 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c8971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.680425085 +0000 UTC m=+1.814370578,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.594448 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89cbb35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89cbb35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,LastTimestamp:2026-02-26 09:42:36.680432895 +0000 UTC m=+1.814378388,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.599695 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c1f80\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC 
m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.680889809 +0000 UTC m=+1.814835302,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.604656 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c8971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.68092188 +0000 UTC m=+1.814867373,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.609912 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89cbb35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89cbb35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,LastTimestamp:2026-02-26 09:42:36.68093523 +0000 UTC m=+1.814880713,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.614346 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c1f80\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.681423524 +0000 UTC m=+1.815369057,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.618904 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c8971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.681468525 +0000 UTC m=+1.815414068,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.624006 4760 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89cbb35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89cbb35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493110069 +0000 UTC m=+1.627055562,LastTimestamp:2026-02-26 09:42:36.681490996 +0000 UTC m=+1.815436529,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.630370 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c1f80\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c1f80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493070208 +0000 UTC m=+1.627015701,LastTimestamp:2026-02-26 09:42:36.681670451 +0000 UTC m=+1.815615944,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.635261 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897c29ca89c8971\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897c29ca89c8971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:36.493097329 +0000 UTC m=+1.627042822,LastTimestamp:2026-02-26 09:42:36.681682781 +0000 UTC m=+1.815628274,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.641202 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29ccff58263 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:37.153239651 +0000 UTC m=+2.287185154,LastTimestamp:2026-02-26 09:42:37.153239651 +0000 UTC m=+2.287185154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.646277 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29ccff603ce openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:37.153272782 +0000 UTC m=+2.287218275,LastTimestamp:2026-02-26 09:42:37.153272782 +0000 UTC m=+2.287218275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.648934 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29ccffcfa29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:37.153729065 +0000 UTC m=+2.287674558,LastTimestamp:2026-02-26 09:42:37.153729065 +0000 UTC m=+2.287674558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.650064 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c29cd065dae8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:37.160602344 +0000 UTC m=+2.294547847,LastTimestamp:2026-02-26 09:42:37.160602344 +0000 UTC m=+2.294547847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.654244 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29cd0678c97 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:37.160713367 +0000 UTC m=+2.294658860,LastTimestamp:2026-02-26 09:42:37.160713367 +0000 UTC m=+2.294658860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.658769 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d504524b0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.305942192 +0000 UTC m=+4.439887685,LastTimestamp:2026-02-26 09:42:39.305942192 +0000 UTC m=+4.439887685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.664432 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d51af937e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.32969459 +0000 UTC m=+4.463640083,LastTimestamp:2026-02-26 09:42:39.32969459 +0000 UTC m=+4.463640083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.668487 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d528bfc36 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.344139318 +0000 UTC m=+4.478084801,LastTimestamp:2026-02-26 09:42:39.344139318 +0000 UTC m=+4.478084801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.671991 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c29d52b3ecf6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.346756854 +0000 UTC m=+4.480702347,LastTimestamp:2026-02-26 09:42:39.346756854 +0000 UTC m=+4.480702347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.675995 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29d52b9baa1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.347137185 +0000 UTC m=+4.481082678,LastTimestamp:2026-02-26 09:42:39.347137185 +0000 UTC m=+4.481082678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.679309 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d52fa45cd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.351367117 +0000 UTC m=+4.485312610,LastTimestamp:2026-02-26 09:42:39.351367117 +0000 UTC m=+4.485312610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.685009 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d5332275f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.355029343 +0000 UTC m=+4.488974836,LastTimestamp:2026-02-26 09:42:39.355029343 +0000 UTC m=+4.488974836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.690172 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d5384ea5a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.36045321 +0000 UTC m=+4.494398703,LastTimestamp:2026-02-26 09:42:39.36045321 +0000 UTC m=+4.494398703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.695230 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d539be2ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.361958634 +0000 UTC m=+4.495904127,LastTimestamp:2026-02-26 09:42:39.361958634 +0000 UTC m=+4.495904127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.698825 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29d54056c42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.368875074 +0000 UTC m=+4.502820567,LastTimestamp:2026-02-26 09:42:39.368875074 +0000 UTC m=+4.502820567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.702385 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c29d5427f0eb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.371137259 +0000 UTC m=+4.505082752,LastTimestamp:2026-02-26 09:42:39.371137259 +0000 UTC m=+4.505082752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.706148 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d61ab2f56 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.597842262 +0000 UTC m=+4.731787765,LastTimestamp:2026-02-26 09:42:39.597842262 +0000 UTC m=+4.731787765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.711398 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d61d2ca89 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.600437897 +0000 UTC m=+4.734383390,LastTimestamp:2026-02-26 09:42:39.600437897 +0000 UTC m=+4.734383390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.714975 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29d61f7803a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.602843706 +0000 UTC m=+4.736789189,LastTimestamp:2026-02-26 09:42:39.602843706 +0000 UTC m=+4.736789189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.718206 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c29d6212fa07 openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.604644359 +0000 UTC m=+4.738589852,LastTimestamp:2026-02-26 09:42:39.604644359 +0000 UTC m=+4.738589852,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.723761 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d77214fd4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.957905364 +0000 UTC m=+5.091850857,LastTimestamp:2026-02-26 09:42:39.957905364 +0000 UTC m=+5.091850857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.728838 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d777af84a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.963781194 +0000 UTC m=+5.097726707,LastTimestamp:2026-02-26 09:42:39.963781194 +0000 UTC m=+5.097726707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.732360 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d777b520f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.963804175 +0000 UTC m=+5.097749708,LastTimestamp:2026-02-26 09:42:39.963804175 +0000 UTC m=+5.097749708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.736853 4760 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c29d7780f043 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.964172355 +0000 UTC m=+5.098117848,LastTimestamp:2026-02-26 09:42:39.964172355 +0000 UTC m=+5.098117848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.740829 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29d77930080 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.96535616 +0000 UTC m=+5.099301693,LastTimestamp:2026-02-26 09:42:39.96535616 +0000 UTC m=+5.099301693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc 
kubenswrapper[4760]: E0226 09:43:23.747320 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d7bb6ba80 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.0348064 +0000 UTC m=+5.168751893,LastTimestamp:2026-02-26 09:42:40.0348064 +0000 UTC m=+5.168751893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.751237 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d7bc9354d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.036017485 +0000 UTC m=+5.169962978,LastTimestamp:2026-02-26 
09:42:40.036017485 +0000 UTC m=+5.169962978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.752524 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d7ca868a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.050645158 +0000 UTC m=+5.184590691,LastTimestamp:2026-02-26 09:42:40.050645158 +0000 UTC m=+5.184590691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.755290 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d7cbb12d6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.051868374 +0000 UTC m=+5.185813907,LastTimestamp:2026-02-26 09:42:40.051868374 +0000 UTC m=+5.185813907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.759183 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d7cf3708d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.055562381 +0000 UTC m=+5.189507874,LastTimestamp:2026-02-26 09:42:40.055562381 +0000 UTC m=+5.189507874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.765369 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29d7cf5293a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.055675194 +0000 UTC m=+5.189620697,LastTimestamp:2026-02-26 09:42:40.055675194 +0000 UTC m=+5.189620697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.769920 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d7d08045c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.05691094 +0000 UTC m=+5.190856433,LastTimestamp:2026-02-26 09:42:40.05691094 +0000 UTC m=+5.190856433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.774522 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897c29d7feccc6b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.105458795 +0000 UTC m=+5.239404308,LastTimestamp:2026-02-26 09:42:40.105458795 +0000 UTC m=+5.239404308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.778518 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d8cd14d5e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.321760606 +0000 UTC m=+5.455706099,LastTimestamp:2026-02-26 09:42:40.321760606 +0000 UTC m=+5.455706099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.782912 4760 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d90ee70cc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.390779084 +0000 UTC m=+5.524724577,LastTimestamp:2026-02-26 09:42:40.390779084 +0000 UTC m=+5.524724577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.787230 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d947a7f6d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.450289517 +0000 UTC m=+5.584235040,LastTimestamp:2026-02-26 09:42:40.450289517 +0000 UTC m=+5.584235040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.793336 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d96d634be openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.489854142 +0000 UTC m=+5.623799675,LastTimestamp:2026-02-26 09:42:40.489854142 +0000 UTC m=+5.623799675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.797382 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29d96f4ca12 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.49185845 +0000 UTC m=+5.625803983,LastTimestamp:2026-02-26 09:42:40.49185845 +0000 UTC m=+5.625803983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.802238 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d9b3414fe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.563115262 +0000 UTC m=+5.697060755,LastTimestamp:2026-02-26 09:42:40.563115262 +0000 UTC m=+5.697060755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.805933 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d9b4647cd openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.564307917 +0000 UTC m=+5.698253410,LastTimestamp:2026-02-26 09:42:40.564307917 +0000 UTC m=+5.698253410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.810459 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d9e171d94 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.611548564 +0000 UTC m=+5.745494057,LastTimestamp:2026-02-26 09:42:40.611548564 +0000 UTC m=+5.745494057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.814003 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29d9e2834d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.612668627 +0000 UTC m=+5.746614120,LastTimestamp:2026-02-26 09:42:40.612668627 +0000 UTC m=+5.746614120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.817918 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29d9eb09066 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.621604966 +0000 UTC m=+5.755550459,LastTimestamp:2026-02-26 09:42:40.621604966 +0000 UTC 
m=+5.755550459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.821192 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29dab549afe openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.833682174 +0000 UTC m=+5.967627667,LastTimestamp:2026-02-26 09:42:40.833682174 +0000 UTC m=+5.967627667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.825956 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29dab55e88f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.833767567 +0000 UTC m=+5.967713060,LastTimestamp:2026-02-26 09:42:40.833767567 +0000 UTC m=+5.967713060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.831089 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dabc5b6ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.841094829 +0000 UTC m=+5.975040332,LastTimestamp:2026-02-26 09:42:40.841094829 +0000 UTC m=+5.975040332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.835355 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29dae260a41 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.880962113 +0000 UTC m=+6.014907626,LastTimestamp:2026-02-26 09:42:40.880962113 +0000 UTC m=+6.014907626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.839865 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29db060c98e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.918366606 +0000 UTC m=+6.052312099,LastTimestamp:2026-02-26 09:42:40.918366606 +0000 UTC m=+6.052312099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.843497 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897c29db0615869 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.918403177 +0000 UTC m=+6.052348700,LastTimestamp:2026-02-26 09:42:40.918403177 +0000 UTC m=+6.052348700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.846715 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29db061a99f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.918423967 +0000 UTC m=+6.052369460,LastTimestamp:2026-02-26 09:42:40.918423967 +0000 UTC m=+6.052369460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.850263 4760 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29db06ee632 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.919291442 +0000 UTC m=+6.053236935,LastTimestamp:2026-02-26 09:42:40.919291442 +0000 UTC m=+6.053236935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.854501 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29db1a36a38 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.939510328 +0000 UTC m=+6.073455811,LastTimestamp:2026-02-26 09:42:40.939510328 +0000 UTC m=+6.073455811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.856678 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dbcfc22a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.129874088 +0000 UTC m=+6.263819571,LastTimestamp:2026-02-26 09:42:41.129874088 +0000 UTC m=+6.263819571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.858404 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dc047851a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.185146138 +0000 UTC m=+6.319091641,LastTimestamp:2026-02-26 
09:42:41.185146138 +0000 UTC m=+6.319091641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.861606 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dc05ad134 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.186410804 +0000 UTC m=+6.320356297,LastTimestamp:2026-02-26 09:42:41.186410804 +0000 UTC m=+6.320356297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.865595 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dd06c2869 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.455982697 +0000 UTC m=+6.589928190,LastTimestamp:2026-02-26 09:42:41.455982697 +0000 UTC m=+6.589928190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.871687 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dd29c3d70 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.49268824 +0000 UTC m=+6.626633733,LastTimestamp:2026-02-26 09:42:41.49268824 +0000 UTC m=+6.626633733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.876252 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29dda7a0892 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.62466421 +0000 UTC m=+6.758609703,LastTimestamp:2026-02-26 09:42:41.62466421 +0000 UTC m=+6.758609703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.882523 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29def7ad97d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.977039229 +0000 UTC m=+7.110984722,LastTimestamp:2026-02-26 09:42:41.977039229 +0000 UTC m=+7.110984722,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.888183 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897c29df19a9d70 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.01267544 +0000 UTC m=+7.146620933,LastTimestamp:2026-02-26 09:42:42.01267544 +0000 UTC m=+7.146620933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.892352 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29df1af8ff0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.01404824 +0000 UTC m=+7.147993763,LastTimestamp:2026-02-26 09:42:42.01404824 +0000 UTC m=+7.147993763,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.895768 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29dffccc62e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.250843694 +0000 UTC m=+7.384789187,LastTimestamp:2026-02-26 09:42:42.250843694 +0000 UTC m=+7.384789187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.901084 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e012888d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.273634514 +0000 UTC m=+7.407580007,LastTimestamp:2026-02-26 09:42:42.273634514 +0000 UTC m=+7.407580007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.907677 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e013f5fc0 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.275131328 +0000 UTC m=+7.409076831,LastTimestamp:2026-02-26 09:42:42.275131328 +0000 UTC m=+7.409076831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.912150 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e0ed32f89 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.502922121 +0000 UTC m=+7.636867614,LastTimestamp:2026-02-26 09:42:42.502922121 +0000 UTC m=+7.636867614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.916708 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897c29e105d8c67 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.528767079 +0000 UTC m=+7.662712572,LastTimestamp:2026-02-26 09:42:42.528767079 +0000 UTC m=+7.662712572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.920285 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e106dfb52 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.52984405 +0000 UTC m=+7.663789553,LastTimestamp:2026-02-26 09:42:42.52984405 +0000 UTC m=+7.663789553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.924235 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in 
API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e1f3edaee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.778413806 +0000 UTC m=+7.912359299,LastTimestamp:2026-02-26 09:42:42.778413806 +0000 UTC m=+7.912359299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.930054 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e22ccb0f6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.838040822 +0000 UTC m=+7.971986325,LastTimestamp:2026-02-26 09:42:42.838040822 +0000 UTC m=+7.971986325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.933121 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897c29e22e3211c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:42.839511324 +0000 UTC m=+7.973456857,LastTimestamp:2026-02-26 09:42:42.839511324 +0000 UTC m=+7.973456857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.936662 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e32069e00 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:43.093495296 +0000 UTC m=+8.227440779,LastTimestamp:2026-02-26 09:42:43.093495296 +0000 UTC m=+8.227440779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.939926 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897c29e35f84c9d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:43.159665821 +0000 UTC m=+8.293611324,LastTimestamp:2026-02-26 09:42:43.159665821 +0000 UTC m=+8.293611324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.943507 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c29dc05ad134\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dc05ad134 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.186410804 +0000 UTC m=+6.320356297,LastTimestamp:2026-02-26 09:42:43.660438927 +0000 UTC m=+8.794384420,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 
09:43:23.947821 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c29dd06c2869\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dd06c2869 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.455982697 +0000 UTC m=+6.589928190,LastTimestamp:2026-02-26 09:42:44.116432526 +0000 UTC m=+9.250378019,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.951223 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c29dd29c3d70\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c29dd29c3d70 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:41.49268824 +0000 UTC m=+6.626633733,LastTimestamp:2026-02-26 09:42:44.23437365 +0000 UTC 
m=+9.368319143,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.955716 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 09:43:23 crc kubenswrapper[4760]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c29f5be26603 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 26 09:43:23 crc kubenswrapper[4760]: body: Feb 26 09:43:23 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:48.090732035 +0000 UTC m=+13.224677518,LastTimestamp:2026-02-26 09:42:48.090732035 +0000 UTC m=+13.224677518,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:43:23 crc kubenswrapper[4760]: > Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.959170 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29f5be36e6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:48.090799727 +0000 UTC m=+13.224745220,LastTimestamp:2026-02-26 09:42:48.090799727 +0000 UTC m=+13.224745220,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.964664 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 09:43:23 crc kubenswrapper[4760]: &Event{ObjectMeta:{kube-apiserver-crc.1897c2a09587303f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 09:43:23 crc kubenswrapper[4760]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 09:43:23 crc kubenswrapper[4760]: Feb 26 09:43:23 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:53.352800319 +0000 UTC m=+18.486745812,LastTimestamp:2026-02-26 09:42:53.352800319 +0000 UTC 
m=+18.486745812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:43:23 crc kubenswrapper[4760]: > Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.968706 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c2a09587c6f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:53.3528389 +0000 UTC m=+18.486784393,LastTimestamp:2026-02-26 09:42:53.3528389 +0000 UTC m=+18.486784393,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.973771 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c2a09587303f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 09:43:23 crc kubenswrapper[4760]: &Event{ObjectMeta:{kube-apiserver-crc.1897c2a09587303f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe 
error: HTTP probe failed with statuscode: 403 Feb 26 09:43:23 crc kubenswrapper[4760]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 09:43:23 crc kubenswrapper[4760]: Feb 26 09:43:23 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:53.352800319 +0000 UTC m=+18.486745812,LastTimestamp:2026-02-26 09:42:53.358231336 +0000 UTC m=+18.492176829,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:43:23 crc kubenswrapper[4760]: > Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.978669 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897c2a09587c6f4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897c2a09587c6f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:53.3528389 +0000 UTC m=+18.486784393,LastTimestamp:2026-02-26 09:42:53.358282258 +0000 UTC m=+18.492227751,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.983246 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 09:43:23 crc kubenswrapper[4760]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c2a1aff3c805 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 09:43:23 crc kubenswrapper[4760]: body: Feb 26 09:43:23 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:58.091091973 +0000 UTC m=+23.225037466,LastTimestamp:2026-02-26 09:42:58.091091973 +0000 UTC m=+23.225037466,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:43:23 crc kubenswrapper[4760]: > Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.988692 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c2a1aff4aaaa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:58.091149994 +0000 UTC m=+23.225095487,LastTimestamp:2026-02-26 09:42:58.091149994 +0000 UTC m=+23.225095487,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:23 crc kubenswrapper[4760]: E0226 09:43:23.995172 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c29f5be26603\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 09:43:23 crc kubenswrapper[4760]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c29f5be26603 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 26 09:43:23 crc kubenswrapper[4760]: body: Feb 26 09:43:23 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:48.090732035 +0000 UTC m=+13.224677518,LastTimestamp:2026-02-26 09:43:08.090817351 +0000 UTC m=+33.224762884,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:43:23 crc kubenswrapper[4760]: > Feb 26 09:43:24 crc kubenswrapper[4760]: E0226 09:43:24.001633 4760 event.go:359] "Server rejected event (will not 
retry!)" err="events \"kube-controller-manager-crc.1897c29f5be36e6f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29f5be36e6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:48.090799727 +0000 UTC m=+13.224745220,LastTimestamp:2026-02-26 09:43:08.090923344 +0000 UTC m=+33.224868877,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:24 crc kubenswrapper[4760]: E0226 09:43:24.006839 4760 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c2a40425e099 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:43:08.093595801 +0000 
UTC m=+33.227541294,LastTimestamp:2026-02-26 09:43:08.093595801 +0000 UTC m=+33.227541294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:24 crc kubenswrapper[4760]: E0226 09:43:24.010947 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c29d539be2ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d539be2ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.361958634 +0000 UTC m=+4.495904127,LastTimestamp:2026-02-26 09:43:08.210869786 +0000 UTC m=+33.344815289,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:24 crc kubenswrapper[4760]: E0226 09:43:24.014650 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c29d777af84a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d777af84a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:39.963781194 +0000 UTC m=+5.097726707,LastTimestamp:2026-02-26 09:43:08.390116164 +0000 UTC m=+33.524061657,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:24 crc kubenswrapper[4760]: E0226 09:43:24.020082 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c29d7ca868a6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c29d7ca868a6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:40.050645158 +0000 UTC m=+5.184590691,LastTimestamp:2026-02-26 09:43:08.399433504 +0000 UTC m=+33.533379007,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:24 crc kubenswrapper[4760]: E0226 09:43:24.027292 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c2a1aff3c805\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 09:43:24 crc kubenswrapper[4760]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c2a1aff3c805 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 09:43:24 crc kubenswrapper[4760]: body: Feb 26 09:43:24 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:58.091091973 +0000 UTC m=+23.225037466,LastTimestamp:2026-02-26 09:43:18.090892089 +0000 UTC m=+43.224837602,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:43:24 crc kubenswrapper[4760]: > Feb 26 09:43:24 crc kubenswrapper[4760]: E0226 09:43:24.031066 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c2a1aff4aaaa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897c2a1aff4aaaa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:58.091149994 +0000 UTC m=+23.225095487,LastTimestamp:2026-02-26 09:43:18.090971391 +0000 UTC m=+43.224916884,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 09:43:24 crc kubenswrapper[4760]: I0226 09:43:24.440925 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:25 crc kubenswrapper[4760]: I0226 09:43:25.440066 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:26 crc kubenswrapper[4760]: I0226 09:43:26.203127 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:43:26 crc kubenswrapper[4760]: I0226 09:43:26.203436 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:26 crc kubenswrapper[4760]: I0226 09:43:26.204969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:26 crc kubenswrapper[4760]: I0226 09:43:26.205006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:26 crc kubenswrapper[4760]: I0226 09:43:26.205018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:26 crc kubenswrapper[4760]: I0226 
09:43:26.205539 4760 scope.go:117] "RemoveContainer" containerID="239c06375c0fd99aebf7862818eecfb19076b926203ac1dcd7849726fb12f82f" Feb 26 09:43:26 crc kubenswrapper[4760]: E0226 09:43:26.205713 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:43:26 crc kubenswrapper[4760]: I0226 09:43:26.441878 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:26 crc kubenswrapper[4760]: E0226 09:43:26.670005 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 09:43:27 crc kubenswrapper[4760]: I0226 09:43:27.441744 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.090533 4760 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.090652 4760 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:43:28 crc kubenswrapper[4760]: E0226 09:43:28.095225 4760 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897c2a1aff3c805\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 09:43:28 crc kubenswrapper[4760]: &Event{ObjectMeta:{kube-controller-manager-crc.1897c2a1aff3c805 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 09:43:28 crc kubenswrapper[4760]: body: Feb 26 09:43:28 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:42:58.091091973 +0000 UTC m=+23.225037466,LastTimestamp:2026-02-26 09:43:28.09062206 +0000 UTC m=+53.224567563,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:43:28 crc kubenswrapper[4760]: > Feb 26 09:43:28 crc kubenswrapper[4760]: E0226 09:43:28.375257 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource 
\"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.379349 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.381029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.381142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.381240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.381350 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:28 crc kubenswrapper[4760]: E0226 09:43:28.386013 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 09:43:28 crc kubenswrapper[4760]: I0226 09:43:28.442290 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:29 crc kubenswrapper[4760]: I0226 09:43:29.444740 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:29 crc kubenswrapper[4760]: I0226 09:43:29.706694 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 
09:43:29 crc kubenswrapper[4760]: I0226 09:43:29.706886 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:29 crc kubenswrapper[4760]: I0226 09:43:29.708362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:29 crc kubenswrapper[4760]: I0226 09:43:29.708424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:29 crc kubenswrapper[4760]: I0226 09:43:29.708443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:30 crc kubenswrapper[4760]: I0226 09:43:30.441428 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:31 crc kubenswrapper[4760]: I0226 09:43:31.443361 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:32 crc kubenswrapper[4760]: I0226 09:43:32.448690 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:33 crc kubenswrapper[4760]: I0226 09:43:33.441016 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:34 crc kubenswrapper[4760]: I0226 09:43:34.441315 4760 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.094560 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.094776 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.095805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.095832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.095842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.097824 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:43:35 crc kubenswrapper[4760]: E0226 09:43:35.380779 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.386830 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.388409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:35 crc kubenswrapper[4760]: 
I0226 09:43:35.388462 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.388482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.388527 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:35 crc kubenswrapper[4760]: E0226 09:43:35.393189 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.442465 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.814907 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.816001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.816044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:35 crc kubenswrapper[4760]: I0226 09:43:35.816053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:36 crc kubenswrapper[4760]: I0226 09:43:36.442265 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope 
Feb 26 09:43:36 crc kubenswrapper[4760]: E0226 09:43:36.670874 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 09:43:37 crc kubenswrapper[4760]: I0226 09:43:37.441018 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:38 crc kubenswrapper[4760]: I0226 09:43:38.442622 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:39 crc kubenswrapper[4760]: I0226 09:43:39.442912 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:40 crc kubenswrapper[4760]: I0226 09:43:40.442200 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.454371 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.576314 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.577818 4760 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.577852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.577866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.578389 4760 scope.go:117] "RemoveContainer" containerID="239c06375c0fd99aebf7862818eecfb19076b926203ac1dcd7849726fb12f82f" Feb 26 09:43:41 crc kubenswrapper[4760]: W0226 09:43:41.660193 4760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:41 crc kubenswrapper[4760]: E0226 09:43:41.660258 4760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.832322 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.834624 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5"} Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.834844 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.836029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.836076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:41 crc kubenswrapper[4760]: I0226 09:43:41.836094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:42 crc kubenswrapper[4760]: E0226 09:43:42.386442 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.393401 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.394865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.394916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.394930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.394957 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:42 crc kubenswrapper[4760]: E0226 09:43:42.399659 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.442625 
4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.724610 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.741682 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.840908 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.841355 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.843552 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" exitCode=255 Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.843648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5"} Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.843740 4760 scope.go:117] "RemoveContainer" containerID="239c06375c0fd99aebf7862818eecfb19076b926203ac1dcd7849726fb12f82f" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.843957 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 
09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.844906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.844949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.844964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:42 crc kubenswrapper[4760]: I0226 09:43:42.845623 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:43:42 crc kubenswrapper[4760]: E0226 09:43:42.845833 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:43:43 crc kubenswrapper[4760]: I0226 09:43:43.450677 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:43 crc kubenswrapper[4760]: I0226 09:43:43.847630 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 09:43:44 crc kubenswrapper[4760]: I0226 09:43:44.441444 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster 
scope Feb 26 09:43:45 crc kubenswrapper[4760]: I0226 09:43:45.440857 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:46 crc kubenswrapper[4760]: I0226 09:43:46.203218 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:43:46 crc kubenswrapper[4760]: I0226 09:43:46.203416 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:46 crc kubenswrapper[4760]: I0226 09:43:46.204639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:46 crc kubenswrapper[4760]: I0226 09:43:46.204697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:46 crc kubenswrapper[4760]: I0226 09:43:46.204719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:46 crc kubenswrapper[4760]: I0226 09:43:46.205362 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:43:46 crc kubenswrapper[4760]: E0226 09:43:46.205612 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:43:46 crc kubenswrapper[4760]: I0226 09:43:46.442026 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" 
is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:46 crc kubenswrapper[4760]: E0226 09:43:46.672030 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 09:43:47 crc kubenswrapper[4760]: I0226 09:43:47.442087 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:48 crc kubenswrapper[4760]: I0226 09:43:48.441313 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:49 crc kubenswrapper[4760]: E0226 09:43:49.391207 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 26 09:43:49 crc kubenswrapper[4760]: I0226 09:43:49.400336 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 09:43:49 crc kubenswrapper[4760]: I0226 09:43:49.401418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:49 crc kubenswrapper[4760]: I0226 09:43:49.401448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:49 crc kubenswrapper[4760]: I0226 09:43:49.401457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:49 crc kubenswrapper[4760]: I0226 09:43:49.401478 
4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:49 crc kubenswrapper[4760]: E0226 09:43:49.404884 4760 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 09:43:49 crc kubenswrapper[4760]: I0226 09:43:49.441175 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:50 crc kubenswrapper[4760]: I0226 09:43:50.440568 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:51 crc kubenswrapper[4760]: I0226 09:43:51.441377 4760 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.211439 4760 csr.go:261] certificate signing request csr-96l4g is approved, waiting to be issued Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.219489 4760 csr.go:257] certificate signing request csr-96l4g is issued Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.325982 4760 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.596613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.596826 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.598043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.598228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.598385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:52 crc kubenswrapper[4760]: I0226 09:43:52.599258 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:43:52 crc kubenswrapper[4760]: E0226 09:43:52.599567 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:43:53 crc kubenswrapper[4760]: I0226 09:43:53.137025 4760 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 26 09:43:53 crc kubenswrapper[4760]: I0226 09:43:53.221037 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-09 16:44:44.410752624 +0000 UTC Feb 26 09:43:53 crc kubenswrapper[4760]: I0226 09:43:53.221078 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7615h0m51.189677429s for next certificate rotation Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.406006 4760 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.407538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.407594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.407610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.407720 4760 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.415304 4760 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.415641 4760 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.415668 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.419131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.419170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.419181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.419201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.419618 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:43:56Z","lastTransitionTime":"2026-02-26T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.432269 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"033b4752-b4ba-4135-ad78-818bf8875f86\\\",\\\"systemUUID\\\":\\\"d0ce6fb9-1a58-4f12-a8d7-d211a8dd8bec\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.438355 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.441858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.441998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.442098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.442179 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.442260 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:43:56Z","lastTransitionTime":"2026-02-26T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.454846 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"033b4752-b4ba-4135-ad78-818bf8875f86\\\",\\\"systemUUID\\\":\\\"d0ce6fb9-1a58-4f12-a8d7-d211a8dd8bec\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.459014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.459065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.459081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.459104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.459121 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:43:56Z","lastTransitionTime":"2026-02-26T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.471793 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"033b4752-b4ba-4135-ad78-818bf8875f86\\\",\\\"systemUUID\\\":\\\"d0ce6fb9-1a58-4f12-a8d7-d211a8dd8bec\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.475797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.475833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.475842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.475858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:43:56 crc kubenswrapper[4760]: I0226 09:43:56.475870 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:43:56Z","lastTransitionTime":"2026-02-26T09:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.488721 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T09:43:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"033b4752-b4ba-4135-ad78-818bf8875f86\\\",\\\"systemUUID\\\":\\\"d0ce6fb9-1a58-4f12-a8d7-d211a8dd8bec\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.488834 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.488863 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.589250 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.672448 4760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.689635 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.790060 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.890182 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:56 crc kubenswrapper[4760]: E0226 09:43:56.990501 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.091620 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.192514 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: 
E0226 09:43:57.293634 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.394449 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.494947 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.595743 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.696904 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.797016 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.897248 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:57 crc kubenswrapper[4760]: E0226 09:43:57.997866 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.099001 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.199086 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.300058 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.400559 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.501141 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.602043 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: I0226 09:43:58.699997 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.702257 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.802669 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:58 crc kubenswrapper[4760]: E0226 09:43:58.903186 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.004268 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.105386 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.205803 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.306742 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.407695 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.508022 4760 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.608351 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.708664 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.809631 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:43:59 crc kubenswrapper[4760]: E0226 09:43:59.910735 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.011392 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.111584 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.212103 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.312643 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.413113 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.514065 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.615210 4760 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: E0226 09:44:00.716130 4760 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.731534 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.828266 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.828314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.828326 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.828344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.828357 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:00Z","lastTransitionTime":"2026-02-26T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.931701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.931747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.931759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.931776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:00 crc kubenswrapper[4760]: I0226 09:44:00.931790 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:00Z","lastTransitionTime":"2026-02-26T09:44:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.035199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.035308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.035337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.035368 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.035390 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.137094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.137153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.137161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.137174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.137185 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.240811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.240911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.240936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.240967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.240989 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.343796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.343833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.343846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.343865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.343878 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.446928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.446981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.446999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.447027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.447045 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.459531 4760 apiserver.go:52] "Watching apiserver" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.474504 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.475014 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.475633 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.476041 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.476074 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.476168 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.476380 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.476591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.476617 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.476642 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.476704 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.478978 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.479976 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.480160 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.480240 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.480329 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.480521 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.480666 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.480827 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.482168 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.513667 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.524401 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.534062 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.546273 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.549894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.549936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.549949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.549973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.549987 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.554769 4760 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.559995 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.573831 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582104 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582202 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582234 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582259 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582281 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582697 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582757 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582870 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582865 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582893 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.582984 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583155 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583234 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583398 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583795 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583817 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583837 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583869 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583880 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583893 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583914 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583955 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.583975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.584274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.584372 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.584441 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.584799 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.584856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.584921 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.584986 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585046 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 
09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585174 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585351 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585450 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585455 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585521 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585745 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585810 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585867 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585941 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586004 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586220 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.585970 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586340 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586368 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586370 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586392 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586417 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586426 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586444 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586471 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586496 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586548 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586590 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586615 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586761 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586836 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586862 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 09:44:01 crc 
kubenswrapper[4760]: I0226 09:44:01.586907 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586929 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586973 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586995 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587038 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.586721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587157 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587149 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.587692 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:02.087665171 +0000 UTC m=+87.221610664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.588090 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.588336 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587563 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587610 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.587350 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.588416 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.588744 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589059 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589106 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589122 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589139 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589190 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589237 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589258 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589276 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589295 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589312 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589331 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589348 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589367 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589386 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589421 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589512 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589636 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589696 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.589887 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590256 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590288 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590324 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590341 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590358 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590374 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590442 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590636 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590733 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.590804 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.591185 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.591365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.591545 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.591681 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.591702 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.591838 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592067 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592102 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592162 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592334 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592361 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592409 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.592969 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.593046 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.593076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.593117 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.593639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.594063 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.594978 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.596154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.597507 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.596680 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598658 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598699 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598717 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598735 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598751 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598769 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598786 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598819 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598836 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598853 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598871 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598902 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598917 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598932 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598949 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598965 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.598998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 
09:44:01.599013 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599060 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599077 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599110 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599156 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599173 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599193 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599215 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599510 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.599946 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.600438 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.600471 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.600528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.600853 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.601036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.601047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.601241 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.601418 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.601443 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.601484 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.601939 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602211 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602220 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602207 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602247 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602305 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602322 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602346 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602522 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602568 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602616 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602623 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602671 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602705 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602719 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod 
"43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602776 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602813 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602846 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602930 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602968 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603046 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603080 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603160 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603209 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603257 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603304 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603352 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 
26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603389 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603420 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603487 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603524 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603560 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603620 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603751 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603788 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603824 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603919 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603954 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604005 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604041 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604130 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604197 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604231 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604265 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604334 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604401 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604433 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604465 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604564 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604656 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 
09:44:01.604694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604753 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604807 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604843 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604912 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604961 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604997 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605035 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605068 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605106 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 09:44:01 crc kubenswrapper[4760]: 
I0226 09:44:01.605159 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605229 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605266 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605302 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605336 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605462 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605496 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605545 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605732 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605769 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605853 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605948 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606124 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606169 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606205 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606303 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606329 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606351 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606391 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606411 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606432 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606451 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606470 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606490 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606551 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606589 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606609 4760 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606629 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606663 4760 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606683 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606701 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: 
I0226 09:44:01.606719 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606752 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606770 4760 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606789 4760 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606806 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606827 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606862 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606882 4760 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606901 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606919 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606938 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606957 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606975 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606993 4760 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" 
(OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602793 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.602791 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603035 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603596 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.603648 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604040 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604231 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604338 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604373 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604876 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604916 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.604987 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605078 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605310 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605537 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.605723 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606453 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606667 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606786 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.606999 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.607096 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.607323 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:02.10730151 +0000 UTC m=+87.241247003 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.607882 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.607968 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608127 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.607020 4760 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608163 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608175 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608187 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608171 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608196 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608279 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608318 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608348 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608377 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608359 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608410 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608410 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608447 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608546 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608555 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608662 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609014 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609043 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609017 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609332 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609340 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609502 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609550 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.609755 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610020 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610135 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.610167 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.610252 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-26 09:44:02.110222663 +0000 UTC m=+87.244168246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610345 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610364 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610679 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610829 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.610808 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.611167 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.611474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.611748 4760 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.612731 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.614132 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.613670 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.614378 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.614476 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.615363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.615503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.616685 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.620396 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.620643 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.620688 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.621145 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.621901 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.621992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622040 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.608667 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622270 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622309 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622340 4760 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622376 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622407 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622437 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" 
Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622464 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622496 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622524 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622551 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622615 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622646 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622678 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622709 4760 
reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622740 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622768 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622800 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622828 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622856 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622883 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622912 4760 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622939 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622965 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.622993 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623020 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623053 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623081 4760 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623109 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath 
\"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623137 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623164 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623192 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623222 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623250 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623277 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623294 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623305 4760 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623374 4760 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623404 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623434 4760 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623464 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623490 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623521 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 
09:44:01.623549 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623617 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623650 4760 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623919 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.623956 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.624181 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.624213 4760 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: 
E0226 09:44:01.624562 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.624695 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.624722 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.624754 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.624822 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:02.124792058 +0000 UTC m=+87.258737631 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.625482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.626284 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.626309 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.626321 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.626436 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: 
"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: E0226 09:44:01.626502 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:02.126486336 +0000 UTC m=+87.260431819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.626789 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.626968 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.630936 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.631987 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.632026 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.632028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.632187 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.632190 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.632290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.632642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.632968 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.633104 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.633251 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.634243 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.634500 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.637302 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.637375 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.637783 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.639511 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.640676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.640767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.641192 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.641432 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.642161 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.642500 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.642618 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.642640 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.643704 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.643736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.644072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.645686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.647355 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.647567 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.647628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.648421 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.651854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.651896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.651910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.651928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.651939 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.652202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.664663 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725088 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725208 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725236 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725258 4760 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725271 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725263 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725283 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725376 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725399 4760 reconciler_common.go:293] "Volume detached for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725419 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725438 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725458 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725477 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725497 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725342 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 
09:44:01.725518 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725634 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725665 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725687 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725707 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725727 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725750 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725770 4760 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725789 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725808 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725827 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725845 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725864 4760 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725882 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725900 4760 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725919 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725937 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.725955 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726125 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726143 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726162 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726179 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726197 4760 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726215 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726233 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726251 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726268 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726286 4760 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726305 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 
09:44:01.726323 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726340 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726359 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726377 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726398 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726416 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726434 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726451 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726469 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726487 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726506 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726524 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726542 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726559 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726610 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726628 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726646 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726664 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726682 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726700 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726718 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726738 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" 
DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726760 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726778 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726796 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726813 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726831 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726848 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726866 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726884 4760 
reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726902 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726921 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726938 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726956 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726974 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.726992 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727010 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727175 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727194 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727212 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727230 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727248 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727266 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727283 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727301 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727318 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727336 4760 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727353 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727371 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727388 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727407 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 
crc kubenswrapper[4760]: I0226 09:44:01.727428 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727445 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727463 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727482 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727501 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727519 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727536 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727556 4760 reconciler_common.go:293] "Volume detached for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727599 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727618 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.727636 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.754901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.754968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.755003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.755051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.755072 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.804833 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.810164 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.816728 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.858427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.858493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.858505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.858538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.858549 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.890766 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f03d0b514a60f27c7f7f829903a1defd6ae5cc48fc218d3847ee94cdb86b47dc"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.891844 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a4650daaf2d81a29b43038bfcb384ab4de2dcc5c73cbcade4437d605b6be73e6"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.892802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"661a050db2cf882ddfc80b9f022d4c1142caa4807e7ed5af67e682ca3dfc9bec"} Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.963498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.963562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.963605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.963633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:01 crc kubenswrapper[4760]: I0226 09:44:01.963658 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:01Z","lastTransitionTime":"2026-02-26T09:44:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.066139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.066183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.066192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.066206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.066215 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.131785 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.131905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.131965 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:03.131935589 +0000 UTC m=+88.265881092 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.132026 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132060 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132121 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.132134 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132139 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132252 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:03.132235138 +0000 UTC m=+88.266180701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132180 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132290 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132300 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132314 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132335 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:03.132325211 +0000 UTC m=+88.266270714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.132187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132359 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:03.132344261 +0000 UTC m=+88.266289834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132183 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: E0226 09:44:02.132402 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:03.132391372 +0000 UTC m=+88.266336975 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.169280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.169319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.169331 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.169347 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.169357 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.271554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.271622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.271634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.271651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.271663 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.374091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.374124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.374133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.374147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.374159 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.476361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.476699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.476803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.476904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.476998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.581411 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.581605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.581642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.581657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.581676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.581687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.582062 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.583779 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.584464 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.585641 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.586159 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.586822 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.587928 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.588600 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.589731 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.590356 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.591859 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.592457 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.593174 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.594383 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.595155 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.596231 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" 
path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.596705 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.597317 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.598589 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.599071 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.600247 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.600822 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.602015 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.602475 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.603897 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.606012 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.606709 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.608181 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.608987 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.610328 4760 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.610556 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.613125 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.614726 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.615372 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.617271 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.618067 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.619198 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.619996 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.621340 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.621839 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.622915 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.623539 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.624687 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.625239 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.626234 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.626774 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.628149 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.628770 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.629790 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.630302 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.631422 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.632108 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.632622 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.683924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.683966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.684008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.684024 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.684035 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.786558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.787052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.787150 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.787250 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.787325 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.889472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.889507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.889516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.889529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.889538 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.896219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"57ad880717fbad679268dc40d7c2be7b7287855bae5117b596829c03fd6ca99a"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.896255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c56c3f8340698fc4c1f81530a6f3e0e1ed6a4c2f02ee3e59eae83d26d2358177"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.897533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8903dadf4bf42c645aff5c1421be2318d6a87c6d2675863a5f100305f7cdd176"} Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.909683 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.923053 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.938792 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.953056 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.964125 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.976655 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ad880717fbad679268dc40d7c2be7b7287855bae5117b596829c03fd6ca99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T09:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c56c3f8340698fc4c1f81530a6f3e0e1ed6a4c2f02ee3e59eae83d26d2358177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T09:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.988436 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.992740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 
09:44:02.992957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.993163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.993395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:02 crc kubenswrapper[4760]: I0226 09:44:02.993609 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:02Z","lastTransitionTime":"2026-02-26T09:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.000012 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:02Z is after 2025-08-24T17:21:41Z"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.011776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ad880717fbad679268dc40d7c2be7b7287855bae5117b596829c03fd6ca99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T09:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c56c3f8340698fc4c1f81530a6f3e0e1ed6a4c2f02ee3e59eae83d26d2358177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T09:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:03Z is after 2025-08-24T17:21:41Z"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.022607 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:03Z is after 2025-08-24T17:21:41Z"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.034054 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:03Z is after 2025-08-24T17:21:41Z"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.045737 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T09:44:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8903dadf4bf42c645aff5c1421be2318d6a87c6d2675863a5f100305f7cdd176\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T09:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T09:44:03Z is after 2025-08-24T17:21:41Z"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.095158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.095299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.095362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.095422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.095478 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.140840 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.140975 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.141008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.141035 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.141058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141133 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141191 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:05.141172419 +0000 UTC m=+90.275117912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141702 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141726 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141739 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141774 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:05.141763846 +0000 UTC m=+90.275709339 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141820 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:05.141810287 +0000 UTC m=+90.275755780 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141958 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.142027 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.142046 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.142138 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:05.142111865 +0000 UTC m=+90.276057638 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.141993 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.142203 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:05.142194808 +0000 UTC m=+90.276140511 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.199118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.199159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.199171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.199191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.199204 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.301166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.301206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.301218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.301235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.301247 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.403196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.403229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.403241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.403258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.403268 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.505904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.505954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.505967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.505984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.505996 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.575419 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.575563 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.575982 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.576054 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.576117 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 26 09:44:03 crc kubenswrapper[4760]: E0226 09:44:03.576177 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.608727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.608761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.608777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.608793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.608806 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.710974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.711000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.711008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.711021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.711029 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.812836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.812879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.812891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.812906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.812914 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.915661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.916079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.916206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.916308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:03 crc kubenswrapper[4760]: I0226 09:44:03.916376 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:03Z","lastTransitionTime":"2026-02-26T09:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.018786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.019126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.019260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.019376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.019475 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.122453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.122497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.122509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.122524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.122533 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.244141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.244192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.244204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.244234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.244249 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.353497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.353538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.353549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.353603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.353617 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.456067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.456098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.456107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.456120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.456128 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.558687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.558725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.558736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.558751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.558762 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.661627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.661683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.661716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.661738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.661751 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.763477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.763530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.763538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.763554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.763564 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.865195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.865243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.865254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.865270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.865282 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.903130 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9b8a0cb870b3dda90f93e65d88449da0590724c511c818f5c1725725462b8439"} Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.967215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.967446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.967514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.967589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:04 crc kubenswrapper[4760]: I0226 09:44:04.967659 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:04Z","lastTransitionTime":"2026-02-26T09:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.070272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.070302 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.070311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.070336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.070346 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.158675 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.158781 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.158825 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.158863 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.158879 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:09.158853873 +0000 UTC m=+94.292799386 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.158929 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.158975 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.158995 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159004 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159017 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159027 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:09.159014778 +0000 UTC m=+94.292960271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159034 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159052 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:09.159038119 +0000 UTC m=+94.292983622 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159077 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:09.159063949 +0000 UTC m=+94.293009452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159131 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159154 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.159168 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:05 crc 
kubenswrapper[4760]: E0226 09:44:05.159215 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:09.159202213 +0000 UTC m=+94.293147726 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.172981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.173020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.173032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.173052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.173065 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.276474 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.276513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.276522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.276537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.276546 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.379167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.379205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.379217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.379236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.379248 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.482192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.482261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.482273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.482290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.482301 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.575850 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.575850 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.575883 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.576180 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.576312 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.576371 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.585219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.585264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.585274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.585287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.585296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.589620 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.589905 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.590533 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.687277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.687308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.687316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.687330 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.687339 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.789713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.789774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.789792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.789815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.789833 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.892390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.892477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.892491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.892509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.892521 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.906783 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:44:05 crc kubenswrapper[4760]: E0226 09:44:05.907025 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.994558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.994612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.994624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.994639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:05 crc kubenswrapper[4760]: I0226 09:44:05.994651 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:05Z","lastTransitionTime":"2026-02-26T09:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.096543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.096627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.096642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.096664 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.096681 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.198874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.198984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.199000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.199014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.199026 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.302479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.302523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.302813 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.302840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.302857 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.406716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.406782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.406797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.406818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.406835 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.508950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.509033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.509053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.509082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.509095 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.614104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.614141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.614152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.614169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.614181 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.716842 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.716873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.716883 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.716897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.716906 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.789224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.789310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.789323 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.789354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 09:44:06 crc kubenswrapper[4760]: I0226 09:44:06.789368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T09:44:06Z","lastTransitionTime":"2026-02-26T09:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 09:44:07 crc kubenswrapper[4760]: I0226 09:44:07.506711 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 26 09:44:07 crc kubenswrapper[4760]: I0226 09:44:07.513817 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 09:44:07 crc kubenswrapper[4760]: I0226 09:44:07.575709 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:07 crc kubenswrapper[4760]: I0226 09:44:07.575733 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:07 crc kubenswrapper[4760]: E0226 09:44:07.575841 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:07 crc kubenswrapper[4760]: I0226 09:44:07.575879 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:07 crc kubenswrapper[4760]: E0226 09:44:07.575969 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:07 crc kubenswrapper[4760]: E0226 09:44:07.576095 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:07 crc kubenswrapper[4760]: I0226 09:44:07.585244 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 26 09:44:07 crc kubenswrapper[4760]: I0226 09:44:07.587065 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.197823 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.197900 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.197924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.197945 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-26 09:44:17.197929059 +0000 UTC m=+102.331874552 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.197965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.197989 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198011 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198022 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198032 4760 projected.go:194] 
Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198044 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198060 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198093 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:17.198084314 +0000 UTC m=+102.332029807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198105 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:17.198100134 +0000 UTC m=+102.332045627 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198104 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198130 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198141 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198115 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:17.198110974 +0000 UTC m=+102.332056467 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.198182 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:17.198170036 +0000 UTC m=+102.332115529 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.576022 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.576066 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:09 crc kubenswrapper[4760]: I0226 09:44:09.576112 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.576150 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.576299 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:09 crc kubenswrapper[4760]: E0226 09:44:09.576408 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:11 crc kubenswrapper[4760]: I0226 09:44:11.575480 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:11 crc kubenswrapper[4760]: I0226 09:44:11.575492 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:11 crc kubenswrapper[4760]: E0226 09:44:11.575643 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:11 crc kubenswrapper[4760]: E0226 09:44:11.575764 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:11 crc kubenswrapper[4760]: I0226 09:44:11.575507 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:11 crc kubenswrapper[4760]: E0226 09:44:11.575987 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:13 crc kubenswrapper[4760]: I0226 09:44:13.575838 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:13 crc kubenswrapper[4760]: I0226 09:44:13.575911 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:13 crc kubenswrapper[4760]: I0226 09:44:13.575942 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:13 crc kubenswrapper[4760]: E0226 09:44:13.575989 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:13 crc kubenswrapper[4760]: E0226 09:44:13.576104 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:13 crc kubenswrapper[4760]: E0226 09:44:13.576204 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:15 crc kubenswrapper[4760]: I0226 09:44:15.575820 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:15 crc kubenswrapper[4760]: I0226 09:44:15.575882 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:15 crc kubenswrapper[4760]: E0226 09:44:15.575989 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:15 crc kubenswrapper[4760]: I0226 09:44:15.575902 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:15 crc kubenswrapper[4760]: E0226 09:44:15.576059 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:15 crc kubenswrapper[4760]: E0226 09:44:15.576131 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.298304 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7z9bk"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.298949 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.301526 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.301772 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.301849 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.313919 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2fsxp"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.314528 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rqxw2"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.315006 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.315512 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-b8nmr"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.315897 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.317086 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.318639 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.318774 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.318823 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.318972 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.320151 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.320864 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.320873 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 
09:44:16.321034 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.321227 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.321258 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.321349 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.323549 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.332810 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-db5w8"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.334099 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.336321 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.336538 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.336633 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.336714 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.336843 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.339166 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.341534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.349864 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.349827355 podStartE2EDuration="9.349827355s" podCreationTimestamp="2026-02-26 09:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:16.334351585 +0000 UTC m=+101.468297098" watchObservedRunningTime="2026-02-26 09:44:16.349827355 +0000 UTC m=+101.483772848" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359629 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-var-lib-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-log-socket\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-cni-multus\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpb6\" (UniqueName: \"kubernetes.io/projected/62f749b1-23a5-43f1-8568-b98b688944fc-kube-api-access-ndpb6\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-kubelet\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: 
I0226 09:44:16.359776 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/62f749b1-23a5-43f1-8568-b98b688944fc-proxy-tls\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-conf-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359819 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-daemon-config\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-os-release\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-systemd\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc 
kubenswrapper[4760]: I0226 09:44:16.359923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359953 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-k8s-cni-cncf-io\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359973 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-cni-bin\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.359993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfrlk\" (UniqueName: \"kubernetes.io/projected/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-kube-api-access-jfrlk\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-os-release\") pod \"multus-rqxw2\" (UID: 
\"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6724eeca-0f20-4e53-91f2-d1b6fc3fb48e-hosts-file\") pod \"node-resolver-7z9bk\" (UID: \"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e\") " pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360038 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-cnibin\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360054 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cnibin\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360067 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-node-log\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360117 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-cni-bin\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360148 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovn-node-metrics-cert\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360174 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht5sb\" (UniqueName: \"kubernetes.io/projected/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-kube-api-access-ht5sb\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360199 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fw6g\" (UniqueName: \"kubernetes.io/projected/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-kube-api-access-9fw6g\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360225 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cni-binary-copy\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-slash\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/62f749b1-23a5-43f1-8568-b98b688944fc-rootfs\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62f749b1-23a5-43f1-8568-b98b688944fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360370 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360386 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-hostroot\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360431 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-env-overrides\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360449 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovnkube-script-lib\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360464 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-system-cni-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360486 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-run-netns\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-etc-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-ovn\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360586 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-cni-netd\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360605 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-netns\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-system-cni-dir\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6s45\" (UniqueName: \"kubernetes.io/projected/6724eeca-0f20-4e53-91f2-d1b6fc3fb48e-kube-api-access-q6s45\") pod \"node-resolver-7z9bk\" (UID: \"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e\") " pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-systemd-units\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovnkube-config\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360704 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-cni-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360720 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-cni-binary-copy\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-multus-certs\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360754 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-kubelet\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360768 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-etc-kubernetes\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-run-ovn-kubernetes\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.360797 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-socket-dir-parent\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.362430 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.362418414 podStartE2EDuration="9.362418414s" podCreationTimestamp="2026-02-26 09:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:16.362019082 +0000 UTC m=+101.495964565" watchObservedRunningTime="2026-02-26 09:44:16.362418414 +0000 UTC m=+101.496363907" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.404676 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.404951 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.406938 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.407323 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.407609 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.408290 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-var-lib-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462227 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-log-socket\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-cni-multus\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " 
pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpb6\" (UniqueName: \"kubernetes.io/projected/62f749b1-23a5-43f1-8568-b98b688944fc-kube-api-access-ndpb6\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-kubelet\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/62f749b1-23a5-43f1-8568-b98b688944fc-proxy-tls\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-conf-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462333 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-daemon-config\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc 
kubenswrapper[4760]: I0226 09:44:16.462349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-k8s-cni-cncf-io\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-cni-bin\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-os-release\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462409 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-systemd\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462423 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: 
I0226 09:44:16.462441 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfrlk\" (UniqueName: \"kubernetes.io/projected/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-kube-api-access-jfrlk\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-os-release\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6724eeca-0f20-4e53-91f2-d1b6fc3fb48e-hosts-file\") pod \"node-resolver-7z9bk\" (UID: \"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e\") " pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462483 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-cnibin\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462497 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovn-node-metrics-cert\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ht5sb\" (UniqueName: \"kubernetes.io/projected/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-kube-api-access-ht5sb\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462526 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fw6g\" (UniqueName: \"kubernetes.io/projected/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-kube-api-access-9fw6g\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cnibin\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-node-log\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-cni-bin\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-slash\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462632 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cni-binary-copy\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462651 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462671 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/538e1ce2-30c1-45de-a89d-04f881a9f694-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462696 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/62f749b1-23a5-43f1-8568-b98b688944fc-rootfs\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462711 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62f749b1-23a5-43f1-8568-b98b688944fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-hostroot\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462785 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/538e1ce2-30c1-45de-a89d-04f881a9f694-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462799 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-system-cni-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462815 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-env-overrides\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462830 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovnkube-script-lib\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-cni-netd\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462870 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-netns\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462895 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-system-cni-dir\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-run-netns\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-etc-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462938 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-ovn\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462951 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-cni-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-cni-binary-copy\") 
pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-multus-certs\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.462996 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6s45\" (UniqueName: \"kubernetes.io/projected/6724eeca-0f20-4e53-91f2-d1b6fc3fb48e-kube-api-access-q6s45\") pod \"node-resolver-7z9bk\" (UID: \"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e\") " pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-systemd-units\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463026 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovnkube-config\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/538e1ce2-30c1-45de-a89d-04f881a9f694-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: 
\"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463056 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-kubelet\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-etc-kubernetes\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/538e1ce2-30c1-45de-a89d-04f881a9f694-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463102 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-run-ovn-kubernetes\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463117 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-socket-dir-parent\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463133 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/538e1ce2-30c1-45de-a89d-04f881a9f694-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463200 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-var-lib-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-log-socket\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463244 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-cni-multus\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.463458 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-kubelet\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464162 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-multus-certs\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464160 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-ovn\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-systemd\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-run-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-cni-netd\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464691 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-netns\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464724 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-hostroot\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464734 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-cni-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.464953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovnkube-config\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-system-cni-dir\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465134 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/62f749b1-23a5-43f1-8568-b98b688944fc-mcd-auth-proxy-config\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465166 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-etc-kubernetes\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465198 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-run-k8s-cni-cncf-io\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465248 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-system-cni-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-conf-dir\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465302 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-run-netns\") 
pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465499 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-kubelet\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465546 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6724eeca-0f20-4e53-91f2-d1b6fc3fb48e-hosts-file\") pod \"node-resolver-7z9bk\" (UID: \"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e\") " pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-cnibin\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465628 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-run-ovn-kubernetes\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-os-release\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 
09:44:16.465678 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-cni-binary-copy\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465700 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovnkube-script-lib\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-socket-dir-parent\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465739 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-etc-openvswitch\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-env-overrides\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465796 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-slash\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.465899 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-systemd-units\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cnibin\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466048 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-node-log\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466244 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/62f749b1-23a5-43f1-8568-b98b688944fc-rootfs\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466332 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-host-var-lib-cni-bin\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466394 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-multus-daemon-config\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-os-release\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-host-cni-bin\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466562 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-cni-binary-copy\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.466908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.469205 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-ovn-node-metrics-cert\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.472033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/62f749b1-23a5-43f1-8568-b98b688944fc-proxy-tls\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.483182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6s45\" (UniqueName: 
\"kubernetes.io/projected/6724eeca-0f20-4e53-91f2-d1b6fc3fb48e-kube-api-access-q6s45\") pod \"node-resolver-7z9bk\" (UID: \"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e\") " pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.483565 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fw6g\" (UniqueName: \"kubernetes.io/projected/55dff70c-b192-4c20-b5e0-4b1ecacfedb0-kube-api-access-9fw6g\") pod \"multus-rqxw2\" (UID: \"55dff70c-b192-4c20-b5e0-4b1ecacfedb0\") " pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.484963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht5sb\" (UniqueName: \"kubernetes.io/projected/b32a82ca-1cad-4bd9-8a32-fc14618c9c8a-kube-api-access-ht5sb\") pod \"ovnkube-node-db5w8\" (UID: \"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a\") " pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.489720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfrlk\" (UniqueName: \"kubernetes.io/projected/5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8-kube-api-access-jfrlk\") pod \"multus-additional-cni-plugins-b8nmr\" (UID: \"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8\") " pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.492308 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpb6\" (UniqueName: \"kubernetes.io/projected/62f749b1-23a5-43f1-8568-b98b688944fc-kube-api-access-ndpb6\") pod \"machine-config-daemon-2fsxp\" (UID: \"62f749b1-23a5-43f1-8568-b98b688944fc\") " pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.494272 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-rmp5z"] Feb 26 09:44:16 crc 
kubenswrapper[4760]: I0226 09:44:16.494645 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.496074 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.496326 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.496445 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.496561 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564224 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2895eda-9920-4bc8-a61a-35bd5c00e91c-host\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564266 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c2895eda-9920-4bc8-a61a-35bd5c00e91c-serviceca\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564307 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/538e1ce2-30c1-45de-a89d-04f881a9f694-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: 
\"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564352 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/538e1ce2-30c1-45de-a89d-04f881a9f694-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564402 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/538e1ce2-30c1-45de-a89d-04f881a9f694-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564427 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xf6m\" (UniqueName: \"kubernetes.io/projected/c2895eda-9920-4bc8-a61a-35bd5c00e91c-kube-api-access-5xf6m\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/538e1ce2-30c1-45de-a89d-04f881a9f694-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564484 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/538e1ce2-30c1-45de-a89d-04f881a9f694-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564501 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/538e1ce2-30c1-45de-a89d-04f881a9f694-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.564953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/538e1ce2-30c1-45de-a89d-04f881a9f694-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.565587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/538e1ce2-30c1-45de-a89d-04f881a9f694-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.567637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/538e1ce2-30c1-45de-a89d-04f881a9f694-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: 
I0226 09:44:16.580027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/538e1ce2-30c1-45de-a89d-04f881a9f694-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qlxrp\" (UID: \"538e1ce2-30c1-45de-a89d-04f881a9f694\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.615999 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.616458 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.616833 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7z9bk" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.618775 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.619153 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.629647 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rqxw2" Feb 26 09:44:16 crc kubenswrapper[4760]: W0226 09:44:16.630446 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6724eeca_0f20_4e53_91f2_d1b6fc3fb48e.slice/crio-f7adf458fe67fb1f594fe21440a7214d820c25ac8081afea6907caa93bf1d552 WatchSource:0}: Error finding container f7adf458fe67fb1f594fe21440a7214d820c25ac8081afea6907caa93bf1d552: Status 404 returned error can't find the container with id f7adf458fe67fb1f594fe21440a7214d820c25ac8081afea6907caa93bf1d552 Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.635301 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-6s89j"] Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.635773 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:16 crc kubenswrapper[4760]: E0226 09:44:16.635837 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6s89j" podUID="53312298-624c-4f35-bdba-cbbf326775d2" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.638844 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:44:16 crc kubenswrapper[4760]: W0226 09:44:16.639829 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55dff70c_b192_4c20_b5e0_4b1ecacfedb0.slice/crio-63dcae083f620304bdd94cd3562a60e5cd96291db0550c41b4469fe451bbdb40 WatchSource:0}: Error finding container 63dcae083f620304bdd94cd3562a60e5cd96291db0550c41b4469fe451bbdb40: Status 404 returned error can't find the container with id 63dcae083f620304bdd94cd3562a60e5cd96291db0550c41b4469fe451bbdb40 Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.647097 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.653774 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.664863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ded3e37-61b3-424d-9b15-2a7fda6577ad-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.664911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ded3e37-61b3-424d-9b15-2a7fda6577ad-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.664940 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xf6m\" (UniqueName: \"kubernetes.io/projected/c2895eda-9920-4bc8-a61a-35bd5c00e91c-kube-api-access-5xf6m\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.664964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8fc\" (UniqueName: \"kubernetes.io/projected/53312298-624c-4f35-bdba-cbbf326775d2-kube-api-access-zw8fc\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.664981 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.665009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82jw\" (UniqueName: \"kubernetes.io/projected/1ded3e37-61b3-424d-9b15-2a7fda6577ad-kube-api-access-l82jw\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.665044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2895eda-9920-4bc8-a61a-35bd5c00e91c-host\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc 
kubenswrapper[4760]: I0226 09:44:16.665064 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c2895eda-9920-4bc8-a61a-35bd5c00e91c-serviceca\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.665086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ded3e37-61b3-424d-9b15-2a7fda6577ad-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.665234 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2895eda-9920-4bc8-a61a-35bd5c00e91c-host\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.665989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c2895eda-9920-4bc8-a61a-35bd5c00e91c-serviceca\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.679849 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xf6m\" (UniqueName: \"kubernetes.io/projected/c2895eda-9920-4bc8-a61a-35bd5c00e91c-kube-api-access-5xf6m\") pod \"node-ca-rmp5z\" (UID: \"c2895eda-9920-4bc8-a61a-35bd5c00e91c\") " pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.714367 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" Feb 26 09:44:16 crc kubenswrapper[4760]: W0226 09:44:16.737215 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod538e1ce2_30c1_45de_a89d_04f881a9f694.slice/crio-7df927ef5bf9e3a2064378b51f7cc2fc5916d82b2b262beaf4470ee0bea8af6c WatchSource:0}: Error finding container 7df927ef5bf9e3a2064378b51f7cc2fc5916d82b2b262beaf4470ee0bea8af6c: Status 404 returned error can't find the container with id 7df927ef5bf9e3a2064378b51f7cc2fc5916d82b2b262beaf4470ee0bea8af6c Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.765492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ded3e37-61b3-424d-9b15-2a7fda6577ad-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.765544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ded3e37-61b3-424d-9b15-2a7fda6577ad-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.765601 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ded3e37-61b3-424d-9b15-2a7fda6577ad-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.765647 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw8fc\" (UniqueName: \"kubernetes.io/projected/53312298-624c-4f35-bdba-cbbf326775d2-kube-api-access-zw8fc\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.765681 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.765696 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82jw\" (UniqueName: \"kubernetes.io/projected/1ded3e37-61b3-424d-9b15-2a7fda6577ad-kube-api-access-l82jw\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: E0226 09:44:16.766216 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:16 crc kubenswrapper[4760]: E0226 09:44:16.766277 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs podName:53312298-624c-4f35-bdba-cbbf326775d2 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:17.266260594 +0000 UTC m=+102.400206087 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs") pod "network-metrics-daemon-6s89j" (UID: "53312298-624c-4f35-bdba-cbbf326775d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.767924 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ded3e37-61b3-424d-9b15-2a7fda6577ad-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.768487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ded3e37-61b3-424d-9b15-2a7fda6577ad-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.770607 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ded3e37-61b3-424d-9b15-2a7fda6577ad-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.788407 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw8fc\" (UniqueName: \"kubernetes.io/projected/53312298-624c-4f35-bdba-cbbf326775d2-kube-api-access-zw8fc\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:16 crc 
kubenswrapper[4760]: I0226 09:44:16.813823 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82jw\" (UniqueName: \"kubernetes.io/projected/1ded3e37-61b3-424d-9b15-2a7fda6577ad-kube-api-access-l82jw\") pod \"ovnkube-control-plane-749d76644c-5cnbj\" (UID: \"1ded3e37-61b3-424d-9b15-2a7fda6577ad\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.816174 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-rmp5z" Feb 26 09:44:16 crc kubenswrapper[4760]: W0226 09:44:16.863637 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2895eda_9920_4bc8_a61a_35bd5c00e91c.slice/crio-70f8ff84bee30e9476460499411f5f0d47e550f8f1b830ff11d01a18435a1d60 WatchSource:0}: Error finding container 70f8ff84bee30e9476460499411f5f0d47e550f8f1b830ff11d01a18435a1d60: Status 404 returned error can't find the container with id 70f8ff84bee30e9476460499411f5f0d47e550f8f1b830ff11d01a18435a1d60 Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.927479 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" Feb 26 09:44:16 crc kubenswrapper[4760]: W0226 09:44:16.941450 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ded3e37_61b3_424d_9b15_2a7fda6577ad.slice/crio-ef2d71eedbde1c98f170dd20e599708e83fbf2707c449d4362b9bcb256015d52 WatchSource:0}: Error finding container ef2d71eedbde1c98f170dd20e599708e83fbf2707c449d4362b9bcb256015d52: Status 404 returned error can't find the container with id ef2d71eedbde1c98f170dd20e599708e83fbf2707c449d4362b9bcb256015d52 Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.966314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqxw2" event={"ID":"55dff70c-b192-4c20-b5e0-4b1ecacfedb0","Type":"ContainerStarted","Data":"b49efd69457f737681cea16bfde064bc2f5c3bc151c4a510ed56a64f46692894"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.966355 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rqxw2" event={"ID":"55dff70c-b192-4c20-b5e0-4b1ecacfedb0","Type":"ContainerStarted","Data":"63dcae083f620304bdd94cd3562a60e5cd96291db0550c41b4469fe451bbdb40"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.969173 4760 generic.go:334] "Generic (PLEG): container finished" podID="b32a82ca-1cad-4bd9-8a32-fc14618c9c8a" containerID="323a305ee06210ed329ae5152ce8a56f7ff9bcb66a9cd1c58c823cbd699f13e1" exitCode=0 Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.969250 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerDied","Data":"323a305ee06210ed329ae5152ce8a56f7ff9bcb66a9cd1c58c823cbd699f13e1"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.969289 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" 
event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"92cc734633f974b83d86f0cc6fb31a4c9e36fa35a24b3c900552554b6eff5afe"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.971258 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerStarted","Data":"757f776319d73b95d3916ea66aaa021fe8259e14480d001952215645766f553b"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.971299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerStarted","Data":"898fd737752746d282cf7ad761d74bb61bec46b1d97ea4ed910dbc91e1f9de34"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.973495 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerStarted","Data":"90bbb694d73d5c1633fd62001bcced590a9ab3c4f8982737f4370e1da9dac693"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.973527 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerStarted","Data":"f4efbe79637d17378d1e3c83568f1cb588976a61342df5089c0211e4fb3d69b9"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.973540 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerStarted","Data":"2ab11a88f42b8ab2d619a412716601327d27778b3add1026689c9d3180726fb8"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.974790 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rmp5z" 
event={"ID":"c2895eda-9920-4bc8-a61a-35bd5c00e91c","Type":"ContainerStarted","Data":"70f8ff84bee30e9476460499411f5f0d47e550f8f1b830ff11d01a18435a1d60"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.979890 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" event={"ID":"1ded3e37-61b3-424d-9b15-2a7fda6577ad","Type":"ContainerStarted","Data":"ef2d71eedbde1c98f170dd20e599708e83fbf2707c449d4362b9bcb256015d52"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.981347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7z9bk" event={"ID":"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e","Type":"ContainerStarted","Data":"1606d77397d97887815d90066ff150efe651fad8c5cb7be5e531748dead5243f"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.981375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7z9bk" event={"ID":"6724eeca-0f20-4e53-91f2-d1b6fc3fb48e","Type":"ContainerStarted","Data":"f7adf458fe67fb1f594fe21440a7214d820c25ac8081afea6907caa93bf1d552"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.985030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" event={"ID":"538e1ce2-30c1-45de-a89d-04f881a9f694","Type":"ContainerStarted","Data":"5cc2ff9ff512b0324c43aeeee74686b95f216860139a87370e8edcc8c9b3d4f5"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.985089 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" event={"ID":"538e1ce2-30c1-45de-a89d-04f881a9f694","Type":"ContainerStarted","Data":"7df927ef5bf9e3a2064378b51f7cc2fc5916d82b2b262beaf4470ee0bea8af6c"} Feb 26 09:44:16 crc kubenswrapper[4760]: I0226 09:44:16.986406 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rqxw2" 
podStartSLOduration=43.986393442 podStartE2EDuration="43.986393442s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:16.983891811 +0000 UTC m=+102.117837324" watchObservedRunningTime="2026-02-26 09:44:16.986393442 +0000 UTC m=+102.120338935" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.045068 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7z9bk" podStartSLOduration=45.045050192 podStartE2EDuration="45.045050192s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:17.017357494 +0000 UTC m=+102.151302997" watchObservedRunningTime="2026-02-26 09:44:17.045050192 +0000 UTC m=+102.178995685" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.045767 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podStartSLOduration=44.045754442 podStartE2EDuration="44.045754442s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:17.036739346 +0000 UTC m=+102.170684849" watchObservedRunningTime="2026-02-26 09:44:17.045754442 +0000 UTC m=+102.179706096" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.271711 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 
09:44:17.271877 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.271852191 +0000 UTC m=+118.405797684 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.272068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.272092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.272110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" 
(UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.272137 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.272154 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272235 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272275 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.272263263 +0000 UTC m=+118.406208756 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272328 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272353 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.272347525 +0000 UTC m=+118.406293018 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272405 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272415 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272424 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272447 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.272440008 +0000 UTC m=+118.406385501 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272486 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272495 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272501 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272519 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.27251372 +0000 UTC m=+118.406459223 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272553 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.272597 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs podName:53312298-624c-4f35-bdba-cbbf326775d2 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:18.272564561 +0000 UTC m=+103.406510054 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs") pod "network-metrics-daemon-6s89j" (UID: "53312298-624c-4f35-bdba-cbbf326775d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.576134 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.576166 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.576281 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.576420 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.577254 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:17 crc kubenswrapper[4760]: E0226 09:44:17.577524 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.991415 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"c005978dac698070bada0cb76601996065e67723dd82c04f842f51f33b7a4c27"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.992520 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"f2586f2d374cdf85f1a807b6cdf7efc5e49302ea4051a8a626ce291e47fad736"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.992634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"a13b7a68dab796e70b6e9529d7bc4d7135f386bf3cce32e773f6f2517cf4375e"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.992717 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"48218cdc1ec96b150199bacd10ba1e0eee891e97659e43668737a6231d466b82"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.992796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"51352696de1375f7273904854e8edaa686dbfb733758076b9fa7a09b1f9efbdc"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.992895 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" 
event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"ce30e5f05560d17aed403c3cc96f9b57947376d889fe5c477b246010c5ea90f1"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.994247 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8" containerID="757f776319d73b95d3916ea66aaa021fe8259e14480d001952215645766f553b" exitCode=0 Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.994304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerDied","Data":"757f776319d73b95d3916ea66aaa021fe8259e14480d001952215645766f553b"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.997028 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" event={"ID":"1ded3e37-61b3-424d-9b15-2a7fda6577ad","Type":"ContainerStarted","Data":"14ad3b8a3c0151a1209217a6a9f6ccd987ac4039f22c2975abbdc8fcec70b4a8"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.997056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" event={"ID":"1ded3e37-61b3-424d-9b15-2a7fda6577ad","Type":"ContainerStarted","Data":"340671515f49554a8da8db3c391fcb4736daa393123c1236870dc0dafe19653a"} Feb 26 09:44:17 crc kubenswrapper[4760]: I0226 09:44:17.998914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rmp5z" event={"ID":"c2895eda-9920-4bc8-a61a-35bd5c00e91c","Type":"ContainerStarted","Data":"0671da4f76be2e5d0e080152914691c6c24cc7e1ce3e6a5000c16ea2f884d5eb"} Feb 26 09:44:18 crc kubenswrapper[4760]: I0226 09:44:18.018856 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qlxrp" podStartSLOduration=45.018840972 
podStartE2EDuration="45.018840972s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:17.088916222 +0000 UTC m=+102.222861735" watchObservedRunningTime="2026-02-26 09:44:18.018840972 +0000 UTC m=+103.152786465" Feb 26 09:44:18 crc kubenswrapper[4760]: I0226 09:44:18.046440 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5cnbj" podStartSLOduration=45.046422458 podStartE2EDuration="45.046422458s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:18.035419334 +0000 UTC m=+103.169364827" watchObservedRunningTime="2026-02-26 09:44:18.046422458 +0000 UTC m=+103.180367951" Feb 26 09:44:18 crc kubenswrapper[4760]: I0226 09:44:18.046841 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rmp5z" podStartSLOduration=46.046835529 podStartE2EDuration="46.046835529s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:18.046541411 +0000 UTC m=+103.180486914" watchObservedRunningTime="2026-02-26 09:44:18.046835529 +0000 UTC m=+103.180781032" Feb 26 09:44:18 crc kubenswrapper[4760]: I0226 09:44:18.283562 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:18 crc kubenswrapper[4760]: E0226 09:44:18.283767 4760 secret.go:188] 
Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:18 crc kubenswrapper[4760]: E0226 09:44:18.283932 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs podName:53312298-624c-4f35-bdba-cbbf326775d2 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:20.283915951 +0000 UTC m=+105.417861434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs") pod "network-metrics-daemon-6s89j" (UID: "53312298-624c-4f35-bdba-cbbf326775d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:18 crc kubenswrapper[4760]: I0226 09:44:18.575556 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:18 crc kubenswrapper[4760]: E0226 09:44:18.575694 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6s89j" podUID="53312298-624c-4f35-bdba-cbbf326775d2" Feb 26 09:44:19 crc kubenswrapper[4760]: I0226 09:44:19.003165 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8" containerID="386c7d5b723200e362d43453e0d12411cde60833043e4a09bf5b4fb66ed9138c" exitCode=0 Feb 26 09:44:19 crc kubenswrapper[4760]: I0226 09:44:19.003247 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerDied","Data":"386c7d5b723200e362d43453e0d12411cde60833043e4a09bf5b4fb66ed9138c"} Feb 26 09:44:19 crc kubenswrapper[4760]: I0226 09:44:19.576122 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:19 crc kubenswrapper[4760]: I0226 09:44:19.576156 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:19 crc kubenswrapper[4760]: E0226 09:44:19.576341 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:19 crc kubenswrapper[4760]: E0226 09:44:19.576450 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:19 crc kubenswrapper[4760]: I0226 09:44:19.576649 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:19 crc kubenswrapper[4760]: E0226 09:44:19.576805 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:19 crc kubenswrapper[4760]: I0226 09:44:19.752726 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 09:44:20 crc kubenswrapper[4760]: I0226 09:44:20.009593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"3fe57ca90827296f8d935c13740e680135bc575ac7ebf2af48dc01bcba8929b9"} Feb 26 09:44:20 crc kubenswrapper[4760]: I0226 09:44:20.011916 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8" containerID="a3439f8d6187103355e63adbd805ce8c06faa78122de14ada411ebd61817e25f" exitCode=0 Feb 26 09:44:20 crc kubenswrapper[4760]: I0226 09:44:20.011972 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerDied","Data":"a3439f8d6187103355e63adbd805ce8c06faa78122de14ada411ebd61817e25f"} Feb 26 09:44:20 crc kubenswrapper[4760]: I0226 09:44:20.307203 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:20 crc kubenswrapper[4760]: E0226 09:44:20.307376 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:20 crc kubenswrapper[4760]: E0226 09:44:20.307458 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs podName:53312298-624c-4f35-bdba-cbbf326775d2 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:24.307434412 +0000 UTC m=+109.441379905 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs") pod "network-metrics-daemon-6s89j" (UID: "53312298-624c-4f35-bdba-cbbf326775d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:20 crc kubenswrapper[4760]: I0226 09:44:20.575726 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:20 crc kubenswrapper[4760]: E0226 09:44:20.575953 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6s89j" podUID="53312298-624c-4f35-bdba-cbbf326775d2" Feb 26 09:44:20 crc kubenswrapper[4760]: I0226 09:44:20.576309 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:44:20 crc kubenswrapper[4760]: E0226 09:44:20.576485 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 09:44:21 crc kubenswrapper[4760]: I0226 09:44:21.018295 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8" containerID="b9f97362dfe8450f62ecc04a00f7df66ca92c5c36e32c49114bacd3cafef5adc" exitCode=0 Feb 26 09:44:21 crc kubenswrapper[4760]: I0226 09:44:21.018367 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerDied","Data":"b9f97362dfe8450f62ecc04a00f7df66ca92c5c36e32c49114bacd3cafef5adc"} Feb 26 09:44:21 crc kubenswrapper[4760]: I0226 09:44:21.575399 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:21 crc kubenswrapper[4760]: I0226 09:44:21.575428 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:21 crc kubenswrapper[4760]: E0226 09:44:21.575647 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:21 crc kubenswrapper[4760]: E0226 09:44:21.575753 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:21 crc kubenswrapper[4760]: I0226 09:44:21.575876 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:21 crc kubenswrapper[4760]: E0226 09:44:21.576117 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:22 crc kubenswrapper[4760]: I0226 09:44:22.027341 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8" containerID="a3e550653e23a33181944744efe14c8a52aae22ae93d36b5376f1b3a9d8a1dfd" exitCode=0 Feb 26 09:44:22 crc kubenswrapper[4760]: I0226 09:44:22.027436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerDied","Data":"a3e550653e23a33181944744efe14c8a52aae22ae93d36b5376f1b3a9d8a1dfd"} Feb 26 09:44:22 crc kubenswrapper[4760]: I0226 09:44:22.575446 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:22 crc kubenswrapper[4760]: E0226 09:44:22.575612 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6s89j" podUID="53312298-624c-4f35-bdba-cbbf326775d2" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.035196 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" event={"ID":"b32a82ca-1cad-4bd9-8a32-fc14618c9c8a","Type":"ContainerStarted","Data":"58cec4cbe4b33c347ed7aa8f4559f3a2ce0c1c44e898e324bfafdda1b54246bf"} Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.035632 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.035667 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.035691 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.038664 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8" containerID="3b1f4ca1d1881dd1e1c3fa622365485cf35bfed9e94b45d0acc65ca5e8cb61de" exitCode=0 Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.038699 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerDied","Data":"3b1f4ca1d1881dd1e1c3fa622365485cf35bfed9e94b45d0acc65ca5e8cb61de"} Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.069833 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.069965 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.084701 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" podStartSLOduration=50.084690646 podStartE2EDuration="50.084690646s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:23.061964779 +0000 UTC m=+108.195910292" watchObservedRunningTime="2026-02-26 09:44:23.084690646 +0000 UTC m=+108.218636139" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.576156 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.576185 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:23 crc kubenswrapper[4760]: I0226 09:44:23.576258 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:23 crc kubenswrapper[4760]: E0226 09:44:23.577163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:23 crc kubenswrapper[4760]: E0226 09:44:23.577308 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:23 crc kubenswrapper[4760]: E0226 09:44:23.577391 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:24 crc kubenswrapper[4760]: I0226 09:44:24.047371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" event={"ID":"5dd0d0d7-3204-4109-b3b9-4a6d12dcc6d8","Type":"ContainerStarted","Data":"741fa3a4826449794bc22c872efc10f4c2078bc36dd10f78b277bb28fa3c02e3"} Feb 26 09:44:24 crc kubenswrapper[4760]: I0226 09:44:24.067084 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-b8nmr" podStartSLOduration=51.06705586 podStartE2EDuration="51.06705586s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:24.065967079 +0000 UTC m=+109.199912572" watchObservedRunningTime="2026-02-26 09:44:24.06705586 +0000 UTC m=+109.201001393" Feb 26 09:44:24 crc kubenswrapper[4760]: I0226 09:44:24.348932 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:24 crc kubenswrapper[4760]: E0226 09:44:24.349118 4760 secret.go:188] Couldn't get 
secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:24 crc kubenswrapper[4760]: E0226 09:44:24.349223 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs podName:53312298-624c-4f35-bdba-cbbf326775d2 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.349201914 +0000 UTC m=+117.483147487 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs") pod "network-metrics-daemon-6s89j" (UID: "53312298-624c-4f35-bdba-cbbf326775d2") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 09:44:24 crc kubenswrapper[4760]: I0226 09:44:24.575764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:24 crc kubenswrapper[4760]: E0226 09:44:24.576134 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6s89j" podUID="53312298-624c-4f35-bdba-cbbf326775d2" Feb 26 09:44:24 crc kubenswrapper[4760]: I0226 09:44:24.840096 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6s89j"] Feb 26 09:44:25 crc kubenswrapper[4760]: I0226 09:44:25.049993 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:25 crc kubenswrapper[4760]: E0226 09:44:25.050093 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-6s89j" podUID="53312298-624c-4f35-bdba-cbbf326775d2" Feb 26 09:44:25 crc kubenswrapper[4760]: I0226 09:44:25.576274 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:25 crc kubenswrapper[4760]: I0226 09:44:25.576288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:25 crc kubenswrapper[4760]: E0226 09:44:25.576406 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 09:44:25 crc kubenswrapper[4760]: I0226 09:44:25.576296 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:25 crc kubenswrapper[4760]: E0226 09:44:25.576501 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 09:44:25 crc kubenswrapper[4760]: E0226 09:44:25.576679 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 09:44:26 crc kubenswrapper[4760]: I0226 09:44:26.576471 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:26 crc kubenswrapper[4760]: E0226 09:44:26.577314 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-6s89j" podUID="53312298-624c-4f35-bdba-cbbf326775d2" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.226453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.226732 4760 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.275340 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m8s4c"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.275773 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.276119 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.276499 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.280501 4760 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.280550 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.280664 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.280680 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.280718 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is 
forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.280731 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.280781 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.280795 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.280844 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 
09:44:27.280858 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.280907 4760 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.280922 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.280986 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281000 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is 
forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281032 4760 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281044 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281078 4760 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281091 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" 
Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281143 4760 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281155 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281190 4760 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281204 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281286 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" 
in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281312 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281489 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: secrets "oauth-apiserver-sa-dockercfg-6r2bq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281539 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"oauth-apiserver-sa-dockercfg-6r2bq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281634 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281656 4760 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: W0226 09:44:27.281723 4760 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-oauth-apiserver": no relationship found between node 'crc' and this object Feb 26 09:44:27 crc kubenswrapper[4760]: E0226 09:44:27.281749 4760 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-oauth-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.282731 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.283305 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.294538 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.294598 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.299479 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.299813 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.299899 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.299835 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2fw9"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.300472 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.300597 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2tqr5"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.301134 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.302193 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.302517 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.302982 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lhclv"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.303081 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.303284 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.303328 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.304793 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g6gh7"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.306000 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.324168 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.324910 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.325450 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6v588"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.326377 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.326688 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.329052 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.329967 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.330012 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.330201 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.330223 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.330700 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.330901 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.330985 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331099 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331168 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331217 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331175 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331346 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331276 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331099 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331645 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331726 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331855 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331960 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332036 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332201 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 09:44:27 crc 
kubenswrapper[4760]: I0226 09:44:27.332294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332437 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332561 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331486 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332560 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331540 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332689 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332224 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331965 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.333103 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.331647 4760 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.333216 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.333332 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.333420 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.332468 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.333776 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.334717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.337614 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.339033 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.360126 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.360390 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.360614 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.362874 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.363556 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-79n6q"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.363848 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hczkw"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.364285 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.364509 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.364619 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.364560 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.364586 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.365657 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.365821 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.364666 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.365100 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.366420 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.366508 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.366514 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.366704 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.367042 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc 
kubenswrapper[4760]: I0226 09:44:27.367384 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.367542 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.370199 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.371105 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cb5r8"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.371784 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nm4ph"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.372309 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.372748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.372970 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.373188 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.373429 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.373541 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.375913 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m8s4c"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.376410 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.376663 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.376839 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.377269 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.377315 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.377369 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.377889 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.378783 4760 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.379103 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.379313 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.379447 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.379976 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380432 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380543 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380606 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380145 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.382791 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380153 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380787 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.383562 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380874 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.383847 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.380911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.383752 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.384110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws99t\" (UniqueName: \"kubernetes.io/projected/54d8e12b-f9b5-4c44-857a-582a2d507728-kube-api-access-ws99t\") pod \"apiserver-7bbb656c7d-njc94\" (UID: 
\"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.384278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c8bz\" (UniqueName: \"kubernetes.io/projected/9452a45a-41af-4942-9273-a8fa4671dd93-kube-api-access-7c8bz\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/571e2ec3-7e3c-4157-aefd-a6d0004de830-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385390 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-policies\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.384820 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-dir\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385566 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-images\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-serving-cert\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-455fs\" (UniqueName: \"kubernetes.io/projected/571e2ec3-7e3c-4157-aefd-a6d0004de830-kube-api-access-455fs\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385861 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z9dvk\" (UID: \"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.385888 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rll\" (UniqueName: \"kubernetes.io/projected/fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6-kube-api-access-25rll\") pod \"cluster-samples-operator-665b6dd947-z9dvk\" (UID: \"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386188 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgss9\" (UniqueName: \"kubernetes.io/projected/9233a625-86b6-4160-a8b8-7db5a1fe7d23-kube-api-access-qgss9\") pod 
\"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386365 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-dir\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386537 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdsd\" (UniqueName: \"kubernetes.io/projected/1d1c8d0d-900e-4dd0-a880-1c6889483328-kube-api-access-9hdsd\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-config\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386625 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fwpn\" (UniqueName: \"kubernetes.io/projected/177eadcf-131c-445e-a714-ab16338b0b5e-kube-api-access-6fwpn\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" Feb 26 09:44:27 crc 
kubenswrapper[4760]: I0226 09:44:27.386645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.384834 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386769 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9452a45a-41af-4942-9273-a8fa4671dd93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386814 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 
09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387006 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzbvt\" (UniqueName: \"kubernetes.io/projected/dcef4e8d-f319-4f69-8795-3102aebecd9c-kube-api-access-gzbvt\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.386000 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387248 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswdc\" (UniqueName: \"kubernetes.io/projected/de95d7ed-3895-43a6-b422-caae1114b0ec-kube-api-access-fswdc\") pod \"downloads-7954f5f757-6v588\" (UID: \"de95d7ed-3895-43a6-b422-caae1114b0ec\") " pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387305 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9233a625-86b6-4160-a8b8-7db5a1fe7d23-serving-cert\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387334 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-config\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 
09:44:27.387369 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d1c8d0d-900e-4dd0-a880-1c6889483328-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387436 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/181dda13-0878-45ce-8585-e1799db10957-serving-cert\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef80081-75af-41e5-a0bf-f6a7d0d384bf-serving-cert\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387497 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-config\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387522 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvffw\" (UniqueName: 
\"kubernetes.io/projected/aef80081-75af-41e5-a0bf-f6a7d0d384bf-kube-api-access-vvffw\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387637 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s5wx\" (UniqueName: \"kubernetes.io/projected/181dda13-0878-45ce-8585-e1799db10957-kube-api-access-5s5wx\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kxzm\" (UniqueName: \"kubernetes.io/projected/e2b4386d-728b-43e0-83e7-030a977d88dd-kube-api-access-2kxzm\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: 
\"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-service-ca-bundle\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/571e2ec3-7e3c-4157-aefd-a6d0004de830-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.387904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/571e2ec3-7e3c-4157-aefd-a6d0004de830-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 
09:44:27.387963 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8676521e-a09e-457c-bd7d-5acd1cc86b3a-serving-cert\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388036 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8676521e-a09e-457c-bd7d-5acd1cc86b3a-config\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388073 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-client-ca\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388093 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-policies\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: 
\"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388125 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388180 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-config\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388207 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388246 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177eadcf-131c-445e-a714-ab16338b0b5e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388270 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef4e8d-f319-4f69-8795-3102aebecd9c-serving-cert\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388293 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-client\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388311 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388338 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9452a45a-41af-4942-9273-a8fa4671dd93-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388357 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4dl\" (UniqueName: \"kubernetes.io/projected/8676521e-a09e-457c-bd7d-5acd1cc86b3a-kube-api-access-bw4dl\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388380 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-client-ca\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388403 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388437 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8676521e-a09e-457c-bd7d-5acd1cc86b3a-trusted-ca\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388457 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9233a625-86b6-4160-a8b8-7db5a1fe7d23-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388479 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/177eadcf-131c-445e-a714-ab16338b0b5e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.388501 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.404959 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.408446 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.412627 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fjgn"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.419195 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.421299 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mkg6j"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.423378 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.423722 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.423955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.424301 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.426063 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.426873 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.427462 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.447971 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.448439 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.451819 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.453168 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dv5m7"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.454511 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.454835 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.454932 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.456872 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dv5m7"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.457385 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.457958 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.460736 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.461230 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.461739 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.462286 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.462881 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.463091 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.466942 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dlxqc"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.467498 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w4g8h"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.467836 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.468175 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.468389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.468526 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w4g8h"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.471253 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.472007 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l4drh"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.472399 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-44s9q"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.472726 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.472880 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-44s9q"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.472993 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.477417 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.478229 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.478903 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.479824 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.481759 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kmqvg"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.482673 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2tqr5"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.482812 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kmqvg"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.482863 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.484298 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.485343 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2fw9"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.487647 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mkg6j"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.488147 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6v588"]
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489025 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef4e8d-f319-4f69-8795-3102aebecd9c-serving-cert\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177eadcf-131c-445e-a714-ab16338b0b5e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489074 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-client\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489104 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw4dl\" (UniqueName: \"kubernetes.io/projected/8676521e-a09e-457c-bd7d-5acd1cc86b3a-kube-api-access-bw4dl\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489121 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9452a45a-41af-4942-9273-a8fa4671dd93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-client-ca\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/177eadcf-131c-445e-a714-ab16338b0b5e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489201 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8676521e-a09e-457c-bd7d-5acd1cc86b3a-trusted-ca\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489318 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9233a625-86b6-4160-a8b8-7db5a1fe7d23-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af3d7b95-7fb4-4343-a019-1f30b1c65b28-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489358 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489393 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws99t\" (UniqueName: \"kubernetes.io/projected/54d8e12b-f9b5-4c44-857a-582a2d507728-kube-api-access-ws99t\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489411 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c8bz\" (UniqueName: \"kubernetes.io/projected/9452a45a-41af-4942-9273-a8fa4671dd93-kube-api-access-7c8bz\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489428 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/571e2ec3-7e3c-4157-aefd-a6d0004de830-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-policies\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489477 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-dir\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489493 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-images\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489509 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-455fs\" (UniqueName: \"kubernetes.io/projected/571e2ec3-7e3c-4157-aefd-a6d0004de830-kube-api-access-455fs\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-serving-cert\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z9dvk\" (UID: \"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489567 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rll\" (UniqueName: \"kubernetes.io/projected/fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6-kube-api-access-25rll\") pod \"cluster-samples-operator-665b6dd947-z9dvk\" (UID: \"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgss9\" (UniqueName: \"kubernetes.io/projected/9233a625-86b6-4160-a8b8-7db5a1fe7d23-kube-api-access-qgss9\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-dir\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3d7b95-7fb4-4343-a019-1f30b1c65b28-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489691 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489706 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdsd\" (UniqueName: \"kubernetes.io/projected/1d1c8d0d-900e-4dd0-a880-1c6889483328-kube-api-access-9hdsd\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489722 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-config\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489753 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fwpn\" (UniqueName: \"kubernetes.io/projected/177eadcf-131c-445e-a714-ab16338b0b5e-kube-api-access-6fwpn\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9452a45a-41af-4942-9273-a8fa4671dd93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzbvt\" (UniqueName: \"kubernetes.io/projected/dcef4e8d-f319-4f69-8795-3102aebecd9c-kube-api-access-gzbvt\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489871 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d1c8d0d-900e-4dd0-a880-1c6889483328-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489885 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswdc\" (UniqueName: \"kubernetes.io/projected/de95d7ed-3895-43a6-b422-caae1114b0ec-kube-api-access-fswdc\") pod \"downloads-7954f5f757-6v588\" (UID: \"de95d7ed-3895-43a6-b422-caae1114b0ec\") " pod="openshift-console/downloads-7954f5f757-6v588"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489916 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9233a625-86b6-4160-a8b8-7db5a1fe7d23-serving-cert\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489932 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-config\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489948 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/181dda13-0878-45ce-8585-e1799db10957-serving-cert\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489963 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef80081-75af-41e5-a0bf-f6a7d0d384bf-serving-cert\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.489994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af3d7b95-7fb4-4343-a019-1f30b1c65b28-config\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490007 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177eadcf-131c-445e-a714-ab16338b0b5e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490011 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-config\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvffw\" (UniqueName: \"kubernetes.io/projected/aef80081-75af-41e5-a0bf-f6a7d0d384bf-kube-api-access-vvffw\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s5wx\" (UniqueName: \"kubernetes.io/projected/181dda13-0878-45ce-8585-e1799db10957-kube-api-access-5s5wx\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kxzm\" (UniqueName: \"kubernetes.io/projected/e2b4386d-728b-43e0-83e7-030a977d88dd-kube-api-access-2kxzm\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-service-ca-bundle\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490258 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490284 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/571e2ec3-7e3c-4157-aefd-a6d0004de830-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490307 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8676521e-a09e-457c-bd7d-5acd1cc86b3a-serving-cert\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7"
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490332 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/571e2ec3-7e3c-4157-aefd-a6d0004de830-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") "
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490370 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8676521e-a09e-457c-bd7d-5acd1cc86b3a-config\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490390 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490440 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-client-ca\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 
09:44:27.490475 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9452a45a-41af-4942-9273-a8fa4671dd93-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-dir\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490537 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-config\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490564 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-policies\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490633 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.490634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.491466 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.492485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-client-ca\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.492818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-config\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.492982 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-client-ca\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.493123 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9233a625-86b6-4160-a8b8-7db5a1fe7d23-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.493153 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g6gh7"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.493215 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.493421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.493479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-config\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.498886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8676521e-a09e-457c-bd7d-5acd1cc86b3a-config\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.499149 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/571e2ec3-7e3c-4157-aefd-a6d0004de830-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.499876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-dir\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.499990 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.500137 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-policies\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.500233 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181dda13-0878-45ce-8585-e1799db10957-service-ca-bundle\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.500454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.500629 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-config\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.501138 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/181dda13-0878-45ce-8585-e1799db10957-serving-cert\") pod \"authentication-operator-69f744f599-lhclv\" (UID: \"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.501458 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef4e8d-f319-4f69-8795-3102aebecd9c-serving-cert\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.501665 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.502673 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.502683 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.502923 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8676521e-a09e-457c-bd7d-5acd1cc86b3a-trusted-ca\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.502957 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 
09:44:27.503076 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.504509 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9452a45a-41af-4942-9273-a8fa4671dd93-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.507591 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.508081 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.509075 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.509845 4760 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.510044 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef80081-75af-41e5-a0bf-f6a7d0d384bf-serving-cert\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.510390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/177eadcf-131c-445e-a714-ab16338b0b5e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.511339 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.511535 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-79n6q"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.512372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.514469 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8676521e-a09e-457c-bd7d-5acd1cc86b3a-serving-cert\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.515743 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z9dvk\" (UID: \"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.517286 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.519169 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.519480 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9233a625-86b6-4160-a8b8-7db5a1fe7d23-serving-cert\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.519726 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.521197 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/571e2ec3-7e3c-4157-aefd-a6d0004de830-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.522592 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hczkw"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.524143 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.525767 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.527351 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.529085 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.530911 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.532452 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fjgn"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.534112 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nm4ph"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.535749 4760 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lhclv"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.537274 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.538673 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.539103 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.540696 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.542195 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.543681 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.545310 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q4p5w"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.546715 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.547394 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hczgn"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.548202 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.549152 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.550891 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.552220 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dlxqc"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.553478 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-44s9q"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.554736 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l4drh"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.556032 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kmqvg"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.557322 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cb5r8"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.558516 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.558671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q4p5w"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.561250 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hczgn"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.562492 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-dpdz4"] Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.563147 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.575308 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.575327 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.575341 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.585585 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.591105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af3d7b95-7fb4-4343-a019-1f30b1c65b28-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.591217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3d7b95-7fb4-4343-a019-1f30b1c65b28-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" Feb 26 09:44:27 crc 
kubenswrapper[4760]: I0226 09:44:27.591301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af3d7b95-7fb4-4343-a019-1f30b1c65b28-config\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.599375 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.618561 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.638874 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.658984 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.678762 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.699381 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.718235 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.745562 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.758475 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.778300 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.798891 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.818943 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.838506 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.857996 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.879128 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.899496 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.919807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.939427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.960219 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" 
Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.978890 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.987728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af3d7b95-7fb4-4343-a019-1f30b1c65b28-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" Feb 26 09:44:27 crc kubenswrapper[4760]: I0226 09:44:27.998859 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.002726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af3d7b95-7fb4-4343-a019-1f30b1c65b28-config\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.018874 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.039208 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.059647 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.078923 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.098430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.117873 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.139212 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.158465 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.179993 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.198490 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.220025 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.239712 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.259870 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.279889 4760 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.299261 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.320284 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.340292 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.359260 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.378554 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.399836 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.418718 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.439125 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.458557 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.477523 4760 request.go:700] Waited for 1.014967383s due to client-side throttling, not priority and fairness, 
request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.478818 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.489824 4760 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.489904 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-client podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:28.989881224 +0000 UTC m=+114.123826727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-client") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.491154 4760 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.491202 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:28.991191631 +0000 UTC m=+114.125137124 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.492289 4760 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.492338 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-trusted-ca-bundle podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:28.992325244 +0000 UTC m=+114.126270747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-trusted-ca-bundle") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.493462 4760 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.493597 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-config podName:1d1c8d0d-900e-4dd0-a880-1c6889483328 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:28.993541238 +0000 UTC m=+114.127486781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-config") pod "machine-api-operator-5694c8668f-m8s4c" (UID: "1d1c8d0d-900e-4dd0-a880-1c6889483328") : failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.494517 4760 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.494543 4760 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.494554 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d1c8d0d-900e-4dd0-a880-1c6889483328-machine-api-operator-tls podName:1d1c8d0d-900e-4dd0-a880-1c6889483328 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:28.994546767 +0000 UTC m=+114.128492260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/1d1c8d0d-900e-4dd0-a880-1c6889483328-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-m8s4c" (UID: "1d1c8d0d-900e-4dd0-a880-1c6889483328") : failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.494599 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:28.994587358 +0000 UTC m=+114.128532871 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.497548 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.500097 4760 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.500138 4760 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.500184 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-images podName:1d1c8d0d-900e-4dd0-a880-1c6889483328 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:29.000170557 +0000 UTC m=+114.134116100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-images") pod "machine-api-operator-5694c8668f-m8s4c" (UID: "1d1c8d0d-900e-4dd0-a880-1c6889483328") : failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.500208 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-policies podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:29.000198728 +0000 UTC m=+114.134144311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-policies") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.502295 4760 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: E0226 09:44:28.502408 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-serving-cert podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:29.00239427 +0000 UTC m=+114.136339833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-serving-cert") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.518209 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.544122 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.558463 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.575383 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.584485 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.600511 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.619521 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.639125 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.659113 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.698269 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.719405 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.740079 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.759801 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.778845 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 
09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.798806 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.819101 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.839427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.865614 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.878544 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.899658 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.919069 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.940098 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.958748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.978998 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 09:44:28 crc kubenswrapper[4760]: I0226 09:44:28.999886 4760 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.007982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-images\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-policies\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008050 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-serving-cert\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008088 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008150 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-config\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: 
\"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d1c8d0d-900e-4dd0-a880-1c6889483328-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008286 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.008373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-client\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.018898 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.039460 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.060675 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.079895 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.100442 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.119357 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.158673 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.179648 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.198755 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.219281 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.238961 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.286263 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw4dl\" (UniqueName: 
\"kubernetes.io/projected/8676521e-a09e-457c-bd7d-5acd1cc86b3a-kube-api-access-bw4dl\") pod \"console-operator-58897d9998-g6gh7\" (UID: \"8676521e-a09e-457c-bd7d-5acd1cc86b3a\") " pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.311039 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c8bz\" (UniqueName: \"kubernetes.io/projected/9452a45a-41af-4942-9273-a8fa4671dd93-kube-api-access-7c8bz\") pod \"openshift-apiserver-operator-796bbdcf4f-lf2j2\" (UID: \"9452a45a-41af-4942-9273-a8fa4671dd93\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.334388 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.337198 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fwpn\" (UniqueName: \"kubernetes.io/projected/177eadcf-131c-445e-a714-ab16338b0b5e-kube-api-access-6fwpn\") pod \"openshift-controller-manager-operator-756b6f6bc6-gq8x8\" (UID: \"177eadcf-131c-445e-a714-ab16338b0b5e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.354847 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.374603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgss9\" (UniqueName: \"kubernetes.io/projected/9233a625-86b6-4160-a8b8-7db5a1fe7d23-kube-api-access-qgss9\") pod \"openshift-config-operator-7777fb866f-4qdxn\" (UID: \"9233a625-86b6-4160-a8b8-7db5a1fe7d23\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.393134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvffw\" (UniqueName: \"kubernetes.io/projected/aef80081-75af-41e5-a0bf-f6a7d0d384bf-kube-api-access-vvffw\") pod \"controller-manager-879f6c89f-b2fw9\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.413503 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-455fs\" (UniqueName: \"kubernetes.io/projected/571e2ec3-7e3c-4157-aefd-a6d0004de830-kube-api-access-455fs\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.433247 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswdc\" (UniqueName: \"kubernetes.io/projected/de95d7ed-3895-43a6-b422-caae1114b0ec-kube-api-access-fswdc\") pod \"downloads-7954f5f757-6v588\" (UID: \"de95d7ed-3895-43a6-b422-caae1114b0ec\") " pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.462788 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kxzm\" 
(UniqueName: \"kubernetes.io/projected/e2b4386d-728b-43e0-83e7-030a977d88dd-kube-api-access-2kxzm\") pod \"oauth-openshift-558db77b4-2tqr5\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.473602 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rll\" (UniqueName: \"kubernetes.io/projected/fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6-kube-api-access-25rll\") pod \"cluster-samples-operator-665b6dd947-z9dvk\" (UID: \"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.492072 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.495633 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzbvt\" (UniqueName: \"kubernetes.io/projected/dcef4e8d-f319-4f69-8795-3102aebecd9c-kube-api-access-gzbvt\") pod \"route-controller-manager-6576b87f9c-zhxnq\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.499830 4760 request.go:700] Waited for 1.997651535s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.526614 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s5wx\" (UniqueName: \"kubernetes.io/projected/181dda13-0878-45ce-8585-e1799db10957-kube-api-access-5s5wx\") pod \"authentication-operator-69f744f599-lhclv\" (UID: 
\"181dda13-0878-45ce-8585-e1799db10957\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.534132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/571e2ec3-7e3c-4157-aefd-a6d0004de830-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x9lh8\" (UID: \"571e2ec3-7e3c-4157-aefd-a6d0004de830\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.540165 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.547635 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.560063 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.561379 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.575319 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.575637 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8"] Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.579257 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.587474 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g6gh7"] Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.588857 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.601881 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.619704 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.628148 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.639086 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.641168 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:44:29 crc kubenswrapper[4760]: W0226 09:44:29.643644 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod177eadcf_131c_445e_a714_ab16338b0b5e.slice/crio-3c849ce611e9c6bd879b58548996e331c64b2886c499116abefad8e6295daec8 WatchSource:0}: Error finding container 3c849ce611e9c6bd879b58548996e331c64b2886c499116abefad8e6295daec8: Status 404 returned error can't find the container with id 3c849ce611e9c6bd879b58548996e331c64b2886c499116abefad8e6295daec8 Feb 26 09:44:29 crc kubenswrapper[4760]: W0226 09:44:29.645462 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8676521e_a09e_457c_bd7d_5acd1cc86b3a.slice/crio-5aa54e767705082fcb3166e23aaffef315d84c1079c683353514ad5f4174c980 WatchSource:0}: Error finding container 5aa54e767705082fcb3166e23aaffef315d84c1079c683353514ad5f4174c980: Status 404 returned error can't find the container with id 5aa54e767705082fcb3166e23aaffef315d84c1079c683353514ad5f4174c980 Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.647637 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.654172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2"] Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.658269 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 26 09:44:29 crc kubenswrapper[4760]: W0226 09:44:29.673968 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9452a45a_41af_4942_9273_a8fa4671dd93.slice/crio-48ae76f096212351b7d70164d688bf95a76b567eb0f02aeccc6c681799510b88 WatchSource:0}: Error finding container 48ae76f096212351b7d70164d688bf95a76b567eb0f02aeccc6c681799510b88: Status 404 returned error can't find the container with id 48ae76f096212351b7d70164d688bf95a76b567eb0f02aeccc6c681799510b88 Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.679314 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.699126 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.719025 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.739848 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.759709 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2fw9"] Feb 26 09:44:29 crc 
kubenswrapper[4760]: I0226 09:44:29.769307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2tqr5"] Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.777611 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af3d7b95-7fb4-4343-a019-1f30b1c65b28-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d9zf8\" (UID: \"af3d7b95-7fb4-4343-a019-1f30b1c65b28\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.778508 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.783685 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d1c8d0d-900e-4dd0-a880-1c6889483328-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.786653 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.800452 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.809002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-audit-policies\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.819716 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.834948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-client\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.841263 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.849238 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"] Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.858131 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.859155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.879589 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.891144 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn"] Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.891539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-serving-cert\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.898298 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: W0226 09:44:29.899816 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9233a625_86b6_4160_a8b8_7db5a1fe7d23.slice/crio-259ecacbdaffc0e42adffb36862c12b0b6a21f2d4fe061bbc1aa1641c0dd69a0 WatchSource:0}: Error finding container 259ecacbdaffc0e42adffb36862c12b0b6a21f2d4fe061bbc1aa1641c0dd69a0: Status 404 returned error can't find the container with id 259ecacbdaffc0e42adffb36862c12b0b6a21f2d4fe061bbc1aa1641c0dd69a0 Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.921167 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.929969 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-images\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.937124 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lhclv"] Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.940843 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.949372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1c8d0d-900e-4dd0-a880-1c6889483328-config\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:29 crc kubenswrapper[4760]: W0226 09:44:29.953358 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod181dda13_0878_45ce_8585_e1799db10957.slice/crio-3f3031363aac61550b304d937a03be36a93d02240d5a51c09a4c7836fbdf2620 WatchSource:0}: Error finding container 3f3031363aac61550b304d937a03be36a93d02240d5a51c09a4c7836fbdf2620: Status 404 returned error can't find the container with id 3f3031363aac61550b304d937a03be36a93d02240d5a51c09a4c7836fbdf2620 Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.958451 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 09:44:29 crc kubenswrapper[4760]: I0226 09:44:29.978274 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 09:44:30 
crc kubenswrapper[4760]: I0226 09:44:30.001325 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.008883 4760 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.008971 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.00895077 +0000 UTC m=+116.142896263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync configmap cache: timed out waiting for the condition Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.009818 4760 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.009872 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config podName:54d8e12b-f9b5-4c44-857a-582a2d507728 nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.009858566 +0000 UTC m=+116.143804059 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config") pod "apiserver-7bbb656c7d-njc94" (UID: "54d8e12b-f9b5-4c44-857a-582a2d507728") : failed to sync secret cache: timed out waiting for the condition Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.019435 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.023231 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.027500 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" Feb 26 09:44:30 crc kubenswrapper[4760]: W0226 09:44:30.028637 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod571e2ec3_7e3c_4157_aefd_a6d0004de830.slice/crio-c5ceb85f9c8e6b7b8eed518be1e66ad3b11406f47bb7e2e4666c595fc5a38bcb WatchSource:0}: Error finding container c5ceb85f9c8e6b7b8eed518be1e66ad3b11406f47bb7e2e4666c595fc5a38bcb: Status 404 returned error can't find the container with id c5ceb85f9c8e6b7b8eed518be1e66ad3b11406f47bb7e2e4666c595fc5a38bcb Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.033254 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdsd\" (UniqueName: \"kubernetes.io/projected/1d1c8d0d-900e-4dd0-a880-1c6889483328-kube-api-access-9hdsd\") pod \"machine-api-operator-5694c8668f-m8s4c\" (UID: \"1d1c8d0d-900e-4dd0-a880-1c6889483328\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.038109 4760 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.045392 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.048023 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws99t\" (UniqueName: \"kubernetes.io/projected/54d8e12b-f9b5-4c44-857a-582a2d507728-kube-api-access-ws99t\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.058242 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.069213 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" event={"ID":"8676521e-a09e-457c-bd7d-5acd1cc86b3a","Type":"ContainerStarted","Data":"61c36b3dc1d67f7a372d645ed99279fcf6e9c957c207cac9c3374367a11e7711"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.069263 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" event={"ID":"8676521e-a09e-457c-bd7d-5acd1cc86b3a","Type":"ContainerStarted","Data":"5aa54e767705082fcb3166e23aaffef315d84c1079c683353514ad5f4174c980"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.070658 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" event={"ID":"177eadcf-131c-445e-a714-ab16338b0b5e","Type":"ContainerStarted","Data":"d470952aa98f012352d130fefd16191bb5f77e78d8a776eae74701d49eb69bd9"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.070688 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" event={"ID":"177eadcf-131c-445e-a714-ab16338b0b5e","Type":"ContainerStarted","Data":"3c849ce611e9c6bd879b58548996e331c64b2886c499116abefad8e6295daec8"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.071399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" event={"ID":"9233a625-86b6-4160-a8b8-7db5a1fe7d23","Type":"ContainerStarted","Data":"259ecacbdaffc0e42adffb36862c12b0b6a21f2d4fe061bbc1aa1641c0dd69a0"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.072507 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" event={"ID":"181dda13-0878-45ce-8585-e1799db10957","Type":"ContainerStarted","Data":"3f3031363aac61550b304d937a03be36a93d02240d5a51c09a4c7836fbdf2620"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.073357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" event={"ID":"dcef4e8d-f319-4f69-8795-3102aebecd9c","Type":"ContainerStarted","Data":"031d500d47638139cb2e733314c3f2cea09e2a5e8c293c8f3d85b26c783e2b67"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.074379 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" event={"ID":"e2b4386d-728b-43e0-83e7-030a977d88dd","Type":"ContainerStarted","Data":"3d68b0eb600f589fa9d62900f446ea39b4601836fed40a57a6cdd667241dcbef"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.075565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" event={"ID":"9452a45a-41af-4942-9273-a8fa4671dd93","Type":"ContainerStarted","Data":"bc09b9e3b253eece15984960bd5bd314e272623dd03f010e852bab3259816277"} 
Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.075666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" event={"ID":"9452a45a-41af-4942-9273-a8fa4671dd93","Type":"ContainerStarted","Data":"48ae76f096212351b7d70164d688bf95a76b567eb0f02aeccc6c681799510b88"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.076676 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" event={"ID":"aef80081-75af-41e5-a0bf-f6a7d0d384bf","Type":"ContainerStarted","Data":"bf0354bce3c60362e588ecf157fce30d8af9c612fa760541169d6bd17dc97d4f"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.077348 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" event={"ID":"571e2ec3-7e3c-4157-aefd-a6d0004de830","Type":"ContainerStarted","Data":"c5ceb85f9c8e6b7b8eed518be1e66ad3b11406f47bb7e2e4666c595fc5a38bcb"} Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.099825 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.120142 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123254 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zwgc\" (UniqueName: \"kubernetes.io/projected/34f92d85-5b67-49b0-ac8c-2a16c55c7894-kube-api-access-7zwgc\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123282 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt288\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-kube-api-access-wt288\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9vq\" (UniqueName: \"kubernetes.io/projected/405dce73-f4d5-4e66-8516-bece5511cc63-kube-api-access-ps9vq\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123324 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-oauth-serving-cert\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123359 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123377 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-proxy-tls\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: 
\"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123407 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-metrics-certs\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123431 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-default-certificate\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-stats-auth\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-encryption-config\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123491 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/12637361-4e28-43b0-9801-15ce0af1b647-node-bootstrap-token\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7zl2\" (UniqueName: \"kubernetes.io/projected/12637361-4e28-43b0-9801-15ce0af1b647-kube-api-access-x7zl2\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-client\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123544 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzw9k\" (UniqueName: \"kubernetes.io/projected/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-kube-api-access-lzw9k\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123564 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-ca\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: 
I0226 09:44:30.123604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cb8ff53-c9e8-4626-a77e-160660696fbc-node-pullsecrets\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123644 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-srv-cert\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123670 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-certificates\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/12637361-4e28-43b0-9801-15ce0af1b647-certs\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgqwr\" (UniqueName: \"kubernetes.io/projected/c23c83e1-f20b-43ba-bdc8-29929236a384-kube-api-access-pgqwr\") pod \"router-default-5444994796-dv5m7\" (UID: 
\"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.123725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszhh\" (UniqueName: \"kubernetes.io/projected/3bda6877-458b-4632-8677-481e0926441b-kube-api-access-lszhh\") pod \"multus-admission-controller-857f4d67dd-mkg6j\" (UID: \"3bda6877-458b-4632-8677-481e0926441b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.125398 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:30.625376476 +0000 UTC m=+115.759322039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.125835 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lq2f\" (UniqueName: \"kubernetes.io/projected/4dfd68f6-1819-4231-9f69-1fa39c594b27-kube-api-access-4lq2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126022 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c23c83e1-f20b-43ba-bdc8-29929236a384-service-ca-bundle\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/405dce73-f4d5-4e66-8516-bece5511cc63-machine-approver-tls\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126118 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-tls\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126180 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-image-import-ca\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126224 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dfd68f6-1819-4231-9f69-1fa39c594b27-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: 
\"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126348 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-trusted-ca-bundle\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126496 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34f92d85-5b67-49b0-ac8c-2a16c55c7894-proxy-tls\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126859 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-etcd-serving-ca\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126928 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-serving-cert\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.126981 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-serving-cert\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127005 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127026 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127062 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-images\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-service-ca\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127108 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-config\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-config\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127182 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhxd\" (UniqueName: \"kubernetes.io/projected/708d73ab-ebcd-4477-becc-dae46b14c8af-kube-api-access-6rhxd\") pod \"dns-operator-744455d44c-nm4ph\" (UID: \"708d73ab-ebcd-4477-becc-dae46b14c8af\") " pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127198 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-config\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkt7p\" (UniqueName: \"kubernetes.io/projected/efeb18fd-ff9f-4052-94d8-50d892b124b7-kube-api-access-fkt7p\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127248 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75bd609c-9135-4d9a-b974-a1b026ac6598-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127266 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-bound-sa-token\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127325 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77q7r\" (UniqueName: \"kubernetes.io/projected/3b4ba74c-b04c-4def-be1a-4e1304730727-kube-api-access-77q7r\") pod \"migrator-59844c95c7-zgghc\" (UID: \"3b4ba74c-b04c-4def-be1a-4e1304730727\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" Feb 26 09:44:30 crc 
kubenswrapper[4760]: I0226 09:44:30.127363 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405dce73-f4d5-4e66-8516-bece5511cc63-config\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127382 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efeb18fd-ff9f-4052-94d8-50d892b124b7-serving-cert\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfd68f6-1819-4231-9f69-1fa39c594b27-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceb99fc-acfa-475b-b79c-6209f5040232-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127506 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-trusted-ca\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127524 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45j9l\" (UniqueName: \"kubernetes.io/projected/4dc1b5d5-817c-44bd-a819-0d09cae65ce9-kube-api-access-45j9l\") pod \"control-plane-machine-set-operator-78cbb6b69f-c95n7\" (UID: \"4dc1b5d5-817c-44bd-a819-0d09cae65ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m4nc\" (UniqueName: \"kubernetes.io/projected/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-kube-api-access-4m4nc\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127685 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cceb99fc-acfa-475b-b79c-6209f5040232-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127701 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-service-ca\") pod \"console-f9d7485db-cb5r8\" (UID: 
\"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34f92d85-5b67-49b0-ac8c-2a16c55c7894-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127768 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vxrs\" (UniqueName: \"kubernetes.io/projected/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-kube-api-access-5vxrs\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127784 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kncl2\" (UniqueName: \"kubernetes.io/projected/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-kube-api-access-kncl2\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dc1b5d5-817c-44bd-a819-0d09cae65ce9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c95n7\" (UID: \"4dc1b5d5-817c-44bd-a819-0d09cae65ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 
09:44:30.127819 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabee30a-f36d-4123-87b9-71a576d3cc2a-config\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127835 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabee30a-f36d-4123-87b9-71a576d3cc2a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127942 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-audit\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127959 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.127994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b29g\" (UniqueName: \"kubernetes.io/projected/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-kube-api-access-6b29g\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75bd609c-9135-4d9a-b974-a1b026ac6598-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128027 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/708d73ab-ebcd-4477-becc-dae46b14c8af-metrics-tls\") pod \"dns-operator-744455d44c-nm4ph\" (UID: \"708d73ab-ebcd-4477-becc-dae46b14c8af\") " pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128043 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/405dce73-f4d5-4e66-8516-bece5511cc63-auth-proxy-config\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128073 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128100 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dabee30a-f36d-4123-87b9-71a576d3cc2a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-etcd-client\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: 
\"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-oauth-config\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128320 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bda6877-458b-4632-8677-481e0926441b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mkg6j\" (UID: \"3bda6877-458b-4632-8677-481e0926441b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128348 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqxh\" (UniqueName: \"kubernetes.io/projected/9cb8ff53-c9e8-4626-a77e-160660696fbc-kube-api-access-znqxh\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128365 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cceb99fc-acfa-475b-b79c-6209f5040232-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.128381 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cb8ff53-c9e8-4626-a77e-160660696fbc-audit-dir\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.187531 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6v588"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.188416 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.229610 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.229840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-encryption-config\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.229875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd519bc0-6b98-495a-bc74-e515b87ec6c1-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.229901 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tvs\" (UniqueName: \"kubernetes.io/projected/cd519bc0-6b98-495a-bc74-e515b87ec6c1-kube-api-access-84tvs\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.229924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/12637361-4e28-43b0-9801-15ce0af1b647-node-bootstrap-token\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.229945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7zl2\" (UniqueName: \"kubernetes.io/projected/12637361-4e28-43b0-9801-15ce0af1b647-kube-api-access-x7zl2\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.229966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-client\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230002 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftx9t\" (UniqueName: \"kubernetes.io/projected/2105337b-ddda-4a9a-bbd8-9442b17eedf5-kube-api-access-ftx9t\") pod \"ingress-canary-kmqvg\" (UID: \"2105337b-ddda-4a9a-bbd8-9442b17eedf5\") " pod="openshift-ingress-canary/ingress-canary-kmqvg" 
Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-ca\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzw9k\" (UniqueName: \"kubernetes.io/projected/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-kube-api-access-lzw9k\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230061 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cb8ff53-c9e8-4626-a77e-160660696fbc-node-pullsecrets\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-srv-cert\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230099 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/12637361-4e28-43b0-9801-15ce0af1b647-certs\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " 
pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230120 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd519bc0-6b98-495a-bc74-e515b87ec6c1-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd389c74-3cf0-4a69-936d-ce93a26d2328-config\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-certificates\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230180 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszhh\" (UniqueName: \"kubernetes.io/projected/3bda6877-458b-4632-8677-481e0926441b-kube-api-access-lszhh\") pod \"multus-admission-controller-857f4d67dd-mkg6j\" (UID: \"3bda6877-458b-4632-8677-481e0926441b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230207 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgqwr\" (UniqueName: 
\"kubernetes.io/projected/c23c83e1-f20b-43ba-bdc8-29929236a384-kube-api-access-pgqwr\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lq2f\" (UniqueName: \"kubernetes.io/projected/4dfd68f6-1819-4231-9f69-1fa39c594b27-kube-api-access-4lq2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c23c83e1-f20b-43ba-bdc8-29929236a384-service-ca-bundle\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/405dce73-f4d5-4e66-8516-bece5511cc63-machine-approver-tls\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230270 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dst\" (UniqueName: \"kubernetes.io/projected/53613d0e-5df3-4b18-8ebd-eb64ad64d487-kube-api-access-b2dst\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 
26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230286 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-tls\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230303 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-image-import-ca\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dfd68f6-1819-4231-9f69-1fa39c594b27-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230336 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-trusted-ca-bundle\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: 
\"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cm2p\" (UniqueName: \"kubernetes.io/projected/28cf1469-8d38-4d73-ab81-8e7a3eb86314-kube-api-access-5cm2p\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34f92d85-5b67-49b0-ac8c-2a16c55c7894-proxy-tls\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230445 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-etcd-serving-ca\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-serving-cert\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-serving-cert\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230502 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-socket-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230516 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230530 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230547 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjbfs\" (UniqueName: \"kubernetes.io/projected/26ffe756-78b8-4546-9587-9d031709ba56-kube-api-access-rjbfs\") pod \"package-server-manager-789f6589d5-nnr4g\" (UID: \"26ffe756-78b8-4546-9587-9d031709ba56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230562 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-images\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230599 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-service-ca\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7bm\" (UniqueName: \"kubernetes.io/projected/cd389c74-3cf0-4a69-936d-ce93a26d2328-kube-api-access-lp7bm\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-config\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230651 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ffe756-78b8-4546-9587-9d031709ba56-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nnr4g\" (UID: \"26ffe756-78b8-4546-9587-9d031709ba56\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71578c2a-f1cc-458d-9f95-058597d6a4b3-profile-collector-cert\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230686 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rhxd\" (UniqueName: \"kubernetes.io/projected/708d73ab-ebcd-4477-becc-dae46b14c8af-kube-api-access-6rhxd\") pod \"dns-operator-744455d44c-nm4ph\" (UID: \"708d73ab-ebcd-4477-becc-dae46b14c8af\") " pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230701 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-config\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230716 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-config\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd389c74-3cf0-4a69-936d-ce93a26d2328-serving-cert\") pod 
\"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkt7p\" (UniqueName: \"kubernetes.io/projected/efeb18fd-ff9f-4052-94d8-50d892b124b7-kube-api-access-fkt7p\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230767 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75bd609c-9135-4d9a-b974-a1b026ac6598-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230783 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-bound-sa-token\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230800 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2105337b-ddda-4a9a-bbd8-9442b17eedf5-cert\") pod \"ingress-canary-kmqvg\" (UID: \"2105337b-ddda-4a9a-bbd8-9442b17eedf5\") " pod="openshift-ingress-canary/ingress-canary-kmqvg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230816 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77q7r\" (UniqueName: 
\"kubernetes.io/projected/3b4ba74c-b04c-4def-be1a-4e1304730727-kube-api-access-77q7r\") pod \"migrator-59844c95c7-zgghc\" (UID: \"3b4ba74c-b04c-4def-be1a-4e1304730727\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230831 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efeb18fd-ff9f-4052-94d8-50d892b124b7-serving-cert\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405dce73-f4d5-4e66-8516-bece5511cc63-config\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230861 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-metrics-tls\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230876 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71578c2a-f1cc-458d-9f95-058597d6a4b3-srv-cert\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230894 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfd68f6-1819-4231-9f69-1fa39c594b27-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceb99fc-acfa-475b-b79c-6209f5040232-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-trusted-ca\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45j9l\" (UniqueName: \"kubernetes.io/projected/4dc1b5d5-817c-44bd-a819-0d09cae65ce9-kube-api-access-45j9l\") pod \"control-plane-machine-set-operator-78cbb6b69f-c95n7\" (UID: \"4dc1b5d5-817c-44bd-a819-0d09cae65ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230976 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m4nc\" (UniqueName: \"kubernetes.io/projected/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-kube-api-access-4m4nc\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") 
" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.230992 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ss4h\" (UniqueName: \"kubernetes.io/projected/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-kube-api-access-2ss4h\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231011 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-mountpoint-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231029 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cceb99fc-acfa-475b-b79c-6209f5040232-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231045 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-service-ca\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-config-volume\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231077 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34f92d85-5b67-49b0-ac8c-2a16c55c7894-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.231093 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:30.731076286 +0000 UTC m=+115.865021779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwqx\" (UniqueName: \"kubernetes.io/projected/913c298a-1dbe-440a-afb0-3ba32cf96a8c-kube-api-access-zhwqx\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231266 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vxrs\" (UniqueName: \"kubernetes.io/projected/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-kube-api-access-5vxrs\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231301 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-csi-data-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231325 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/4dc1b5d5-817c-44bd-a819-0d09cae65ce9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c95n7\" (UID: \"4dc1b5d5-817c-44bd-a819-0d09cae65ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231345 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kncl2\" (UniqueName: \"kubernetes.io/projected/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-kube-api-access-kncl2\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231381 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-plugins-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/913c298a-1dbe-440a-afb0-3ba32cf96a8c-webhook-cert\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: 
I0226 09:44:30.231412 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-secret-volume\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231437 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabee30a-f36d-4123-87b9-71a576d3cc2a-config\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231453 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabee30a-f36d-4123-87b9-71a576d3cc2a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-audit\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/28cf1469-8d38-4d73-ab81-8e7a3eb86314-signing-cabundle\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231531 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b29g\" (UniqueName: \"kubernetes.io/projected/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-kube-api-access-6b29g\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231551 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/708d73ab-ebcd-4477-becc-dae46b14c8af-metrics-tls\") pod \"dns-operator-744455d44c-nm4ph\" (UID: \"708d73ab-ebcd-4477-becc-dae46b14c8af\") " pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231633 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/405dce73-f4d5-4e66-8516-bece5511cc63-auth-proxy-config\") pod 
\"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/913c298a-1dbe-440a-afb0-3ba32cf96a8c-apiservice-cert\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/28cf1469-8d38-4d73-ab81-8e7a3eb86314-signing-key\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75bd609c-9135-4d9a-b974-a1b026ac6598-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231726 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstb4\" (UniqueName: \"kubernetes.io/projected/71578c2a-f1cc-458d-9f95-058597d6a4b3-kube-api-access-fstb4\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231759 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvcv6\" (UniqueName: \"kubernetes.io/projected/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-kube-api-access-jvcv6\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231809 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-etcd-client\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231860 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dabee30a-f36d-4123-87b9-71a576d3cc2a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 
09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231902 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-oauth-config\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-registration-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-config-volume\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.231964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cd519bc0-6b98-495a-bc74-e515b87ec6c1-ready\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232015 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bda6877-458b-4632-8677-481e0926441b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mkg6j\" (UID: \"3bda6877-458b-4632-8677-481e0926441b\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232039 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cceb99fc-acfa-475b-b79c-6209f5040232-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232077 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cb8ff53-c9e8-4626-a77e-160660696fbc-audit-dir\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232098 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqxh\" (UniqueName: \"kubernetes.io/projected/9cb8ff53-c9e8-4626-a77e-160660696fbc-kube-api-access-znqxh\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232124 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zwgc\" (UniqueName: \"kubernetes.io/projected/34f92d85-5b67-49b0-ac8c-2a16c55c7894-kube-api-access-7zwgc\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232148 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt288\" (UniqueName: 
\"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-kube-api-access-wt288\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232170 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/913c298a-1dbe-440a-afb0-3ba32cf96a8c-tmpfs\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps9vq\" (UniqueName: \"kubernetes.io/projected/405dce73-f4d5-4e66-8516-bece5511cc63-kube-api-access-ps9vq\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-oauth-serving-cert\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232237 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-proxy-tls\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232275 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-metrics-certs\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232320 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-default-certificate\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.232339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-stats-auth\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.234115 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-certificates\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" 
Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.234323 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/405dce73-f4d5-4e66-8516-bece5511cc63-auth-proxy-config\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.235045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c23c83e1-f20b-43ba-bdc8-29929236a384-service-ca-bundle\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.235759 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m8s4c"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.236834 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/405dce73-f4d5-4e66-8516-bece5511cc63-config\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.237198 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-stats-auth\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.237401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/34f92d85-5b67-49b0-ac8c-2a16c55c7894-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.237615 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/405dce73-f4d5-4e66-8516-bece5511cc63-machine-approver-tls\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.237816 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-config\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.238517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-config\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.239120 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34f92d85-5b67-49b0-ac8c-2a16c55c7894-proxy-tls\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.239316 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/12637361-4e28-43b0-9801-15ce0af1b647-node-bootstrap-token\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.239589 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-ca\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.239716 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-etcd-serving-ca\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.239754 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cb8ff53-c9e8-4626-a77e-160660696fbc-node-pullsecrets\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.240133 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75bd609c-9135-4d9a-b974-a1b026ac6598-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.240138 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfd68f6-1819-4231-9f69-1fa39c594b27-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.240483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3bda6877-458b-4632-8677-481e0926441b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mkg6j\" (UID: \"3bda6877-458b-4632-8677-481e0926441b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.240789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-config\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.241763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cceb99fc-acfa-475b-b79c-6209f5040232-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.242294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-serving-cert\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.244049 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75bd609c-9135-4d9a-b974-a1b026ac6598-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.244540 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-service-ca\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.245488 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-encryption-config\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.245511 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-trusted-ca\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.246145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-serving-cert\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.246175 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9cb8ff53-c9e8-4626-a77e-160660696fbc-audit-dir\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.246359 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-images\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.246372 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.246602 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-oauth-serving-cert\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.246969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-service-ca\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.247038 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabee30a-f36d-4123-87b9-71a576d3cc2a-config\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.247159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-srv-cert\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.247329 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-image-import-ca\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.247346 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efeb18fd-ff9f-4052-94d8-50d892b124b7-etcd-client\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.247886 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:30.747868574 +0000 UTC m=+115.881814147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.248317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.248676 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cceb99fc-acfa-475b-b79c-6209f5040232-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.248890 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-trusted-ca-bundle\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.249249 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9cb8ff53-c9e8-4626-a77e-160660696fbc-audit\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.249391 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.249511 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-console-oauth-config\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.254315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dfd68f6-1819-4231-9f69-1fa39c594b27-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.254713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.255029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.261381 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabee30a-f36d-4123-87b9-71a576d3cc2a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.262019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dc1b5d5-817c-44bd-a819-0d09cae65ce9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c95n7\" (UID: \"4dc1b5d5-817c-44bd-a819-0d09cae65ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.261426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-metrics-certs\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.261813 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 
09:44:30.261876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9cb8ff53-c9e8-4626-a77e-160660696fbc-etcd-client\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.261945 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/12637361-4e28-43b0-9801-15ce0af1b647-certs\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.261975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/708d73ab-ebcd-4477-becc-dae46b14c8af-metrics-tls\") pod \"dns-operator-744455d44c-nm4ph\" (UID: \"708d73ab-ebcd-4477-becc-dae46b14c8af\") " pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.261385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-proxy-tls\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.262147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efeb18fd-ff9f-4052-94d8-50d892b124b7-serving-cert\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.262442 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-tls\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.262797 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c23c83e1-f20b-43ba-bdc8-29929236a384-default-certificate\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.262817 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: W0226 09:44:30.263204 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d1c8d0d_900e_4dd0_a880_1c6889483328.slice/crio-2afa49461f9dc4979a3218099f7cb7108197df8115c3f5e7f9c3608b32316376 WatchSource:0}: Error finding container 2afa49461f9dc4979a3218099f7cb7108197df8115c3f5e7f9c3608b32316376: Status 404 returned error can't find the container with id 2afa49461f9dc4979a3218099f7cb7108197df8115c3f5e7f9c3608b32316376 Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.274601 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszhh\" (UniqueName: \"kubernetes.io/projected/3bda6877-458b-4632-8677-481e0926441b-kube-api-access-lszhh\") pod 
\"multus-admission-controller-857f4d67dd-mkg6j\" (UID: \"3bda6877-458b-4632-8677-481e0926441b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.286876 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.296012 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgqwr\" (UniqueName: \"kubernetes.io/projected/c23c83e1-f20b-43ba-bdc8-29929236a384-kube-api-access-pgqwr\") pod \"router-default-5444994796-dv5m7\" (UID: \"c23c83e1-f20b-43ba-bdc8-29929236a384\") " pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.316209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lq2f\" (UniqueName: \"kubernetes.io/projected/4dfd68f6-1819-4231-9f69-1fa39c594b27-kube-api-access-4lq2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmktk\" (UID: \"4dfd68f6-1819-4231-9f69-1fa39c594b27\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.319386 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.335085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.335555 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cm2p\" (UniqueName: \"kubernetes.io/projected/28cf1469-8d38-4d73-ab81-8e7a3eb86314-kube-api-access-5cm2p\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.335631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-socket-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.335684 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:30.835627603 +0000 UTC m=+115.969573216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.335798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbfs\" (UniqueName: \"kubernetes.io/projected/26ffe756-78b8-4546-9587-9d031709ba56-kube-api-access-rjbfs\") pod \"package-server-manager-789f6589d5-nnr4g\" (UID: \"26ffe756-78b8-4546-9587-9d031709ba56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.335875 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7bm\" (UniqueName: \"kubernetes.io/projected/cd389c74-3cf0-4a69-936d-ce93a26d2328-kube-api-access-lp7bm\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.335940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71578c2a-f1cc-458d-9f95-058597d6a4b3-profile-collector-cert\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336013 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26ffe756-78b8-4546-9587-9d031709ba56-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nnr4g\" (UID: \"26ffe756-78b8-4546-9587-9d031709ba56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336078 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-socket-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd389c74-3cf0-4a69-936d-ce93a26d2328-serving-cert\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336208 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2105337b-ddda-4a9a-bbd8-9442b17eedf5-cert\") pod \"ingress-canary-kmqvg\" (UID: \"2105337b-ddda-4a9a-bbd8-9442b17eedf5\") " pod="openshift-ingress-canary/ingress-canary-kmqvg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336278 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-metrics-tls\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336309 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/71578c2a-f1cc-458d-9f95-058597d6a4b3-srv-cert\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ss4h\" (UniqueName: \"kubernetes.io/projected/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-kube-api-access-2ss4h\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-mountpoint-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhwqx\" (UniqueName: \"kubernetes.io/projected/913c298a-1dbe-440a-afb0-3ba32cf96a8c-kube-api-access-zhwqx\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-config-volume\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336626 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-csi-data-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/913c298a-1dbe-440a-afb0-3ba32cf96a8c-webhook-cert\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-secret-volume\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336773 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-plugins-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336804 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/28cf1469-8d38-4d73-ab81-8e7a3eb86314-signing-cabundle\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.336887 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/913c298a-1dbe-440a-afb0-3ba32cf96a8c-apiservice-cert\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/28cf1469-8d38-4d73-ab81-8e7a3eb86314-signing-key\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337331 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fstb4\" (UniqueName: \"kubernetes.io/projected/71578c2a-f1cc-458d-9f95-058597d6a4b3-kube-api-access-fstb4\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvcv6\" (UniqueName: \"kubernetes.io/projected/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-kube-api-access-jvcv6\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-registration-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc 
kubenswrapper[4760]: I0226 09:44:30.337516 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-config-volume\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cd519bc0-6b98-495a-bc74-e515b87ec6c1-ready\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337691 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/913c298a-1dbe-440a-afb0-3ba32cf96a8c-tmpfs\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337796 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84tvs\" (UniqueName: \"kubernetes.io/projected/cd519bc0-6b98-495a-bc74-e515b87ec6c1-kube-api-access-84tvs\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 
09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd519bc0-6b98-495a-bc74-e515b87ec6c1-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.337983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftx9t\" (UniqueName: \"kubernetes.io/projected/2105337b-ddda-4a9a-bbd8-9442b17eedf5-kube-api-access-ftx9t\") pod \"ingress-canary-kmqvg\" (UID: \"2105337b-ddda-4a9a-bbd8-9442b17eedf5\") " pod="openshift-ingress-canary/ingress-canary-kmqvg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.338044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd389c74-3cf0-4a69-936d-ce93a26d2328-config\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.338100 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd519bc0-6b98-495a-bc74-e515b87ec6c1-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.338141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2dst\" (UniqueName: \"kubernetes.io/projected/53613d0e-5df3-4b18-8ebd-eb64ad64d487-kube-api-access-b2dst\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " 
pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.338550 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-registration-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.339242 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-plugins-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.339971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-mountpoint-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.340055 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhxd\" (UniqueName: \"kubernetes.io/projected/708d73ab-ebcd-4477-becc-dae46b14c8af-kube-api-access-6rhxd\") pod \"dns-operator-744455d44c-nm4ph\" (UID: \"708d73ab-ebcd-4477-becc-dae46b14c8af\") " pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.340126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53613d0e-5df3-4b18-8ebd-eb64ad64d487-csi-data-dir\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 
09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.340209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd519bc0-6b98-495a-bc74-e515b87ec6c1-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.340648 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:30.840563963 +0000 UTC m=+115.974509516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.341927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/28cf1469-8d38-4d73-ab81-8e7a3eb86314-signing-cabundle\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.342064 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-config-volume\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.342377 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cd519bc0-6b98-495a-bc74-e515b87ec6c1-ready\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.342603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71578c2a-f1cc-458d-9f95-058597d6a4b3-srv-cert\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.342761 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/28cf1469-8d38-4d73-ab81-8e7a3eb86314-signing-key\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.342780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd389c74-3cf0-4a69-936d-ce93a26d2328-config\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.342863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71578c2a-f1cc-458d-9f95-058597d6a4b3-profile-collector-cert\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.343211 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-config-volume\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.344640 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd519bc0-6b98-495a-bc74-e515b87ec6c1-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.344652 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/913c298a-1dbe-440a-afb0-3ba32cf96a8c-tmpfs\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.344791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26ffe756-78b8-4546-9587-9d031709ba56-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nnr4g\" (UID: \"26ffe756-78b8-4546-9587-9d031709ba56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.345388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-metrics-tls\") pod \"dns-default-hczgn\" (UID: 
\"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.346310 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd389c74-3cf0-4a69-936d-ce93a26d2328-serving-cert\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.346565 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-secret-volume\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.346872 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2105337b-ddda-4a9a-bbd8-9442b17eedf5-cert\") pod \"ingress-canary-kmqvg\" (UID: \"2105337b-ddda-4a9a-bbd8-9442b17eedf5\") " pod="openshift-ingress-canary/ingress-canary-kmqvg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.347385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/913c298a-1dbe-440a-afb0-3ba32cf96a8c-apiservice-cert\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.348195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/913c298a-1dbe-440a-afb0-3ba32cf96a8c-webhook-cert\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.349937 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.353473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7zl2\" (UniqueName: \"kubernetes.io/projected/12637361-4e28-43b0-9801-15ce0af1b647-kube-api-access-x7zl2\") pod \"machine-config-server-w4g8h\" (UID: \"12637361-4e28-43b0-9801-15ce0af1b647\") " pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: W0226 09:44:30.367349 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc23c83e1_f20b_43ba_bdc8_29929236a384.slice/crio-b95ce1c1f6f3f21514a1d2e45685acf8e546e4ba24fc4a6c62ca3ad90797b24e WatchSource:0}: Error finding container b95ce1c1f6f3f21514a1d2e45685acf8e546e4ba24fc4a6c62ca3ad90797b24e: Status 404 returned error can't find the container with id b95ce1c1f6f3f21514a1d2e45685acf8e546e4ba24fc4a6c62ca3ad90797b24e Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.370639 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.372592 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzw9k\" (UniqueName: \"kubernetes.io/projected/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-kube-api-access-lzw9k\") pod \"marketplace-operator-79b997595-dlxqc\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.397819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cceb99fc-acfa-475b-b79c-6209f5040232-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tft7j\" (UID: \"cceb99fc-acfa-475b-b79c-6209f5040232\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.403922 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.413455 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w4g8h" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.419328 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkt7p\" (UniqueName: \"kubernetes.io/projected/efeb18fd-ff9f-4052-94d8-50d892b124b7-kube-api-access-fkt7p\") pod \"etcd-operator-b45778765-79n6q\" (UID: \"efeb18fd-ff9f-4052-94d8-50d892b124b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.433870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m4nc\" (UniqueName: \"kubernetes.io/projected/8c9a9e90-0849-4fb8-be6b-3cbc35e1982c-kube-api-access-4m4nc\") pod \"machine-config-operator-74547568cd-5xjtp\" (UID: \"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.438835 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.439020 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:30.938979756 +0000 UTC m=+116.072925249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.439167 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.439629 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:30.939619784 +0000 UTC m=+116.073565277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.480889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77q7r\" (UniqueName: \"kubernetes.io/projected/3b4ba74c-b04c-4def-be1a-4e1304730727-kube-api-access-77q7r\") pod \"migrator-59844c95c7-zgghc\" (UID: \"3b4ba74c-b04c-4def-be1a-4e1304730727\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.496093 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-bound-sa-token\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.534859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45j9l\" (UniqueName: \"kubernetes.io/projected/4dc1b5d5-817c-44bd-a819-0d09cae65ce9-kube-api-access-45j9l\") pod \"control-plane-machine-set-operator-78cbb6b69f-c95n7\" (UID: \"4dc1b5d5-817c-44bd-a819-0d09cae65ce9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.541131 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.541959 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.041931218 +0000 UTC m=+116.175876711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: W0226 09:44:30.542112 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12637361_4e28_43b0_9801_15ce0af1b647.slice/crio-d8083641f8133491d031359f0ac5324c53d27292ca9440e8538af50ae261d76b WatchSource:0}: Error finding container d8083641f8133491d031359f0ac5324c53d27292ca9440e8538af50ae261d76b: Status 404 returned error can't find the container with id d8083641f8133491d031359f0ac5324c53d27292ca9440e8538af50ae261d76b Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.552873 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 
09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.558429 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mkg6j"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.559388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps9vq\" (UniqueName: \"kubernetes.io/projected/405dce73-f4d5-4e66-8516-bece5511cc63-kube-api-access-ps9vq\") pod \"machine-approver-56656f9798-2sb8r\" (UID: \"405dce73-f4d5-4e66-8516-bece5511cc63\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.568988 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.574886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqxh\" (UniqueName: \"kubernetes.io/projected/9cb8ff53-c9e8-4626-a77e-160660696fbc-kube-api-access-znqxh\") pod \"apiserver-76f77b778f-hczkw\" (UID: \"9cb8ff53-c9e8-4626-a77e-160660696fbc\") " pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.577647 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.586820 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.598169 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zwgc\" (UniqueName: \"kubernetes.io/projected/34f92d85-5b67-49b0-ac8c-2a16c55c7894-kube-api-access-7zwgc\") pod \"machine-config-controller-84d6567774-9s5xj\" (UID: \"34f92d85-5b67-49b0-ac8c-2a16c55c7894\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.604298 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.615240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt288\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-kube-api-access-wt288\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.634601 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.642991 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.643788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.644307 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.144287442 +0000 UTC m=+116.278232935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.644534 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vxrs\" (UniqueName: \"kubernetes.io/projected/f1c99d97-783b-44bf-b113-d5e3ffbffd6d-kube-api-access-5vxrs\") pod \"olm-operator-6b444d44fb-vbsd5\" (UID: \"f1c99d97-783b-44bf-b113-d5e3ffbffd6d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.656509 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.656583 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kncl2\" (UniqueName: \"kubernetes.io/projected/f4d6fe9e-5990-4e8b-8b6f-efbac8600193-kube-api-access-kncl2\") pod \"console-f9d7485db-cb5r8\" (UID: \"f4d6fe9e-5990-4e8b-8b6f-efbac8600193\") " pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.671947 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.676520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dabee30a-f36d-4123-87b9-71a576d3cc2a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x5flk\" (UID: \"dabee30a-f36d-4123-87b9-71a576d3cc2a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.680867 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:30 crc kubenswrapper[4760]: W0226 09:44:30.686225 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405dce73_f4d5_4e66_8516_bece5511cc63.slice/crio-29d2386e018c94195beabb1dbd3ffdb4c4bba5745ea3bba9c112df80519b7a10 WatchSource:0}: Error finding container 29d2386e018c94195beabb1dbd3ffdb4c4bba5745ea3bba9c112df80519b7a10: Status 404 returned error can't find the container with id 29d2386e018c94195beabb1dbd3ffdb4c4bba5745ea3bba9c112df80519b7a10 Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.687119 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.699144 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b29g\" (UniqueName: \"kubernetes.io/projected/0d8fda26-daaf-42fe-9cb8-6057f9c7abb8-kube-api-access-6b29g\") pod \"ingress-operator-5b745b69d9-mpttf\" (UID: \"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.708129 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dlxqc"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.720306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cm2p\" (UniqueName: \"kubernetes.io/projected/28cf1469-8d38-4d73-ab81-8e7a3eb86314-kube-api-access-5cm2p\") pod \"service-ca-9c57cc56f-44s9q\" (UID: \"28cf1469-8d38-4d73-ab81-8e7a3eb86314\") " pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.731770 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.733831 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjbfs\" (UniqueName: \"kubernetes.io/projected/26ffe756-78b8-4546-9587-9d031709ba56-kube-api-access-rjbfs\") pod \"package-server-manager-789f6589d5-nnr4g\" (UID: \"26ffe756-78b8-4546-9587-9d031709ba56\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.744560 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.745059 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.245045031 +0000 UTC m=+116.378990524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.754726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7bm\" (UniqueName: \"kubernetes.io/projected/cd389c74-3cf0-4a69-936d-ce93a26d2328-kube-api-access-lp7bm\") pod \"service-ca-operator-777779d784-l4drh\" (UID: \"cd389c74-3cf0-4a69-936d-ce93a26d2328\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.779339 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvcv6\" (UniqueName: \"kubernetes.io/projected/b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405-kube-api-access-jvcv6\") pod \"dns-default-hczgn\" (UID: \"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405\") " pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.783991 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.805304 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhwqx\" (UniqueName: \"kubernetes.io/projected/913c298a-1dbe-440a-afb0-3ba32cf96a8c-kube-api-access-zhwqx\") pod \"packageserver-d55dfcdfc-9fdhq\" (UID: \"913c298a-1dbe-440a-afb0-3ba32cf96a8c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.811506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-79n6q"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.841100 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2dst\" (UniqueName: \"kubernetes.io/projected/53613d0e-5df3-4b18-8ebd-eb64ad64d487-kube-api-access-b2dst\") pod \"csi-hostpathplugin-q4p5w\" (UID: \"53613d0e-5df3-4b18-8ebd-eb64ad64d487\") " pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.863983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.864484 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.364463992 +0000 UTC m=+116.498409485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.868290 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.876768 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fstb4\" (UniqueName: \"kubernetes.io/projected/71578c2a-f1cc-458d-9f95-058597d6a4b3-kube-api-access-fstb4\") pod \"catalog-operator-68c6474976-zr9kg\" (UID: \"71578c2a-f1cc-458d-9f95-058597d6a4b3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.890134 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.891223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ss4h\" (UniqueName: \"kubernetes.io/projected/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-kube-api-access-2ss4h\") pod \"collect-profiles-29534970-r2bbh\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.899561 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.899985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84tvs\" (UniqueName: \"kubernetes.io/projected/cd519bc0-6b98-495a-bc74-e515b87ec6c1-kube-api-access-84tvs\") pod \"cni-sysctl-allowlist-ds-dpdz4\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.926637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftx9t\" (UniqueName: \"kubernetes.io/projected/2105337b-ddda-4a9a-bbd8-9442b17eedf5-kube-api-access-ftx9t\") pod \"ingress-canary-kmqvg\" (UID: \"2105337b-ddda-4a9a-bbd8-9442b17eedf5\") " pod="openshift-ingress-canary/ingress-canary-kmqvg" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.932403 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nm4ph"] Feb 26 09:44:30 crc kubenswrapper[4760]: W0226 09:44:30.958823 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefeb18fd_ff9f_4052_94d8_50d892b124b7.slice/crio-95b758d0df632ff0902bd31161096d62f990316429f75ce81a1858e529b3cec9 WatchSource:0}: Error finding container 95b758d0df632ff0902bd31161096d62f990316429f75ce81a1858e529b3cec9: Status 404 returned error can't find the container with id 95b758d0df632ff0902bd31161096d62f990316429f75ce81a1858e529b3cec9 Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.960109 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc"] Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.960307 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.965770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:30 crc kubenswrapper[4760]: E0226 09:44:30.966096 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.466081946 +0000 UTC m=+116.600027439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:30 crc kubenswrapper[4760]: I0226 09:44:30.995321 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.020657 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:31 crc kubenswrapper[4760]: W0226 09:44:31.023008 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b4ba74c_b04c_4def_be1a_4e1304730727.slice/crio-e33d142b85ed79c2a94afe0c98251a5cb795a6b71925472f21f7a170b5675df0 WatchSource:0}: Error finding container e33d142b85ed79c2a94afe0c98251a5cb795a6b71925472f21f7a170b5675df0: Status 404 returned error can't find the container with id e33d142b85ed79c2a94afe0c98251a5cb795a6b71925472f21f7a170b5675df0 Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.033129 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.048879 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.051192 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.058510 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kmqvg" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.068280 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.068315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.068384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.069303 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.569287425 +0000 UTC m=+116.703232918 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.069823 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54d8e12b-f9b5-4c44-857a-582a2d507728-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.075085 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.075948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54d8e12b-f9b5-4c44-857a-582a2d507728-encryption-config\") pod \"apiserver-7bbb656c7d-njc94\" (UID: \"54d8e12b-f9b5-4c44-857a-582a2d507728\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.088569 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.094018 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w4g8h" event={"ID":"12637361-4e28-43b0-9801-15ce0af1b647","Type":"ContainerStarted","Data":"b645d4276d0b1b22a541a11bb691db66f713f799ad9dd3ede7106c0ba4cbeca5"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.094061 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w4g8h" event={"ID":"12637361-4e28-43b0-9801-15ce0af1b647","Type":"ContainerStarted","Data":"d8083641f8133491d031359f0ac5324c53d27292ca9440e8538af50ae261d76b"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.109265 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" event={"ID":"181dda13-0878-45ce-8585-e1799db10957","Type":"ContainerStarted","Data":"138a9d6c9247c7997db2fc1b972ffe2a6d7b8adae40fa90c582c73f9629e826c"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.110860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" event={"ID":"af3d7b95-7fb4-4343-a019-1f30b1c65b28","Type":"ContainerStarted","Data":"4a45e7322effc404c1f78cbc6b252f2e216396b38eecd8ba0c9effabe7b2c880"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.110906 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" event={"ID":"af3d7b95-7fb4-4343-a019-1f30b1c65b28","Type":"ContainerStarted","Data":"f4d96eb71e49867fb2875e7a92c11ebf2042f851142e127e302cfe9fbaddfd92"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.114260 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" event={"ID":"efeb18fd-ff9f-4052-94d8-50d892b124b7","Type":"ContainerStarted","Data":"95b758d0df632ff0902bd31161096d62f990316429f75ce81a1858e529b3cec9"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.115814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" event={"ID":"dcef4e8d-f319-4f69-8795-3102aebecd9c","Type":"ContainerStarted","Data":"a8a312468a0af8401f1680f14f12bd074ded96d26d970315fbd26cbb923812c4"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.116104 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.117606 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" event={"ID":"e2b4386d-728b-43e0-83e7-030a977d88dd","Type":"ContainerStarted","Data":"785f9a550d2d35149d52d4e37a5d639abfd426238f120f013bcc1cd37453ce61"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.117803 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.119085 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" event={"ID":"3b4ba74c-b04c-4def-be1a-4e1304730727","Type":"ContainerStarted","Data":"e33d142b85ed79c2a94afe0c98251a5cb795a6b71925472f21f7a170b5675df0"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.123460 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" 
event={"ID":"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6","Type":"ContainerStarted","Data":"6743e2cf6e6117ca0f735cb919cda30cefc311adc3b46889b9ce54a8b07bc49c"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.123494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" event={"ID":"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6","Type":"ContainerStarted","Data":"c57687add440e9813c7f94d0b0c00fa7275b18aeb64345d68bcc0ae58de23e85"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.130406 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2tqr5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.130462 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zhxnq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.130517 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" podUID="dcef4e8d-f319-4f69-8795-3102aebecd9c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.130466 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 
10.217.0.10:6443: connect: connection refused" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.132043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" event={"ID":"405dce73-f4d5-4e66-8516-bece5511cc63","Type":"ContainerStarted","Data":"29d2386e018c94195beabb1dbd3ffdb4c4bba5745ea3bba9c112df80519b7a10"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.152230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" event={"ID":"4dfd68f6-1819-4231-9f69-1fa39c594b27","Type":"ContainerStarted","Data":"25e032e7bdea444f8e23c784605be22f79ed0c9e57605a1ad8c1ae22fa2070b1"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.154308 4760 generic.go:334] "Generic (PLEG): container finished" podID="9233a625-86b6-4160-a8b8-7db5a1fe7d23" containerID="e0f1be42591d4be374381a6e4e9f8fd4dba29f3cbfcfd9b3ff8c5fb05dadb2ba" exitCode=0 Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.154548 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" event={"ID":"9233a625-86b6-4160-a8b8-7db5a1fe7d23","Type":"ContainerDied","Data":"e0f1be42591d4be374381a6e4e9f8fd4dba29f3cbfcfd9b3ff8c5fb05dadb2ba"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.156116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dv5m7" event={"ID":"c23c83e1-f20b-43ba-bdc8-29929236a384","Type":"ContainerStarted","Data":"3094d241e9142e57f49c5769c02caf186bcafa2a2a76acf779db6fc2a5f1199b"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.156173 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dv5m7" 
event={"ID":"c23c83e1-f20b-43ba-bdc8-29929236a384","Type":"ContainerStarted","Data":"b95ce1c1f6f3f21514a1d2e45685acf8e546e4ba24fc4a6c62ca3ad90797b24e"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.171012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.171229 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.671186656 +0000 UTC m=+116.805132149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.171286 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" event={"ID":"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb","Type":"ContainerStarted","Data":"805a2544d893684a1e6ffdb388e21d3e2f89012aa91b3c70903d6b3f57ce8bfc"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.171478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.171932 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.671917617 +0000 UTC m=+116.805863110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.202409 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" event={"ID":"3bda6877-458b-4632-8677-481e0926441b","Type":"ContainerStarted","Data":"537486baac3fe9124d6371846075d47958135cee42103eb9843e74606354fa93"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.213868 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" event={"ID":"571e2ec3-7e3c-4157-aefd-a6d0004de830","Type":"ContainerStarted","Data":"f2462fbb6c97046972a47429d8781f2ecc3af4429b2c75bdc1c335e16d1122af"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.218424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" 
event={"ID":"708d73ab-ebcd-4477-becc-dae46b14c8af","Type":"ContainerStarted","Data":"986eecbd55d140ab95d2913ae7cb9ed930682d372d085ed085f64dc659d078b2"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.227454 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.229538 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6v588" event={"ID":"de95d7ed-3895-43a6-b422-caae1114b0ec","Type":"ContainerStarted","Data":"3ebfcb75003d2e9db98b357f14a450f4ed040f65680bcd9fd4cf43b70e87378d"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.229630 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6v588" event={"ID":"de95d7ed-3895-43a6-b422-caae1114b0ec","Type":"ContainerStarted","Data":"14a2c0655fa2c5ec9de0a0abc4b98996c4efd89aa8e0268d07fb564dada68704"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.230122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.232173 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" event={"ID":"aef80081-75af-41e5-a0bf-f6a7d0d384bf","Type":"ContainerStarted","Data":"a30925b264dc57723578def0354c1bf32084e4c69b273733b8b34f21b6166159"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.232938 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.236202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" 
event={"ID":"1d1c8d0d-900e-4dd0-a880-1c6889483328","Type":"ContainerStarted","Data":"e9a97a38819e758cf86651c8b89924a851764dc1f3a84377cef66aa5077b4bd4"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.236232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" event={"ID":"1d1c8d0d-900e-4dd0-a880-1c6889483328","Type":"ContainerStarted","Data":"2afa49461f9dc4979a3218099f7cb7108197df8115c3f5e7f9c3608b32316376"} Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.236249 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.238337 4760 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-b2fw9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.238378 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" podUID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.238642 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.238662 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.240157 4760 patch_prober.go:28] interesting pod/console-operator-58897d9998-g6gh7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.240230 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" podUID="8676521e-a09e-457c-bd7d-5acd1cc86b3a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.254681 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.272402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.273696 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7"] Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.274284 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 09:44:31.774179779 +0000 UTC m=+116.908125272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.297427 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.315619 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.351006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.354724 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.354791 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.365821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-hczgn"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.377424 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.398619 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.898598222 +0000 UTC m=+117.032543715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.403649 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cb5r8"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.440801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.482621 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.483157 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.983134869 +0000 UTC m=+117.117080362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.483200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.483660 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:31.983653724 +0000 UTC m=+117.117599217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.526362 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-44s9q"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.544980 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hczkw"] Feb 26 09:44:31 crc kubenswrapper[4760]: W0226 09:44:31.571288 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0e52d65_bfe6_4f19_a0a1_2cdf4bf69405.slice/crio-39e67c4089c1468707da534ff74e04776529ad060c6cdfe7bb325af3ac8899d6 WatchSource:0}: Error finding container 39e67c4089c1468707da534ff74e04776529ad060c6cdfe7bb325af3ac8899d6: Status 404 returned error can't find the container with id 39e67c4089c1468707da534ff74e04776529ad060c6cdfe7bb325af3ac8899d6 Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.584217 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.584589 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.084556147 +0000 UTC m=+117.218501640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.687034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.687460 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.187448797 +0000 UTC m=+117.321394280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.741814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh"] Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.788819 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.789157 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.289138983 +0000 UTC m=+117.423084476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.890602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.890997 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.390982803 +0000 UTC m=+117.524928306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.908021 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf"] Feb 26 09:44:31 crc kubenswrapper[4760]: W0226 09:44:31.963120 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc388b29a_9aad_47a6_ba5d_8eabdb4480a6.slice/crio-4c2073a8f1c4b20124e6bc605d146c6f85e888f703d0c834b9491ea18b103767 WatchSource:0}: Error finding container 4c2073a8f1c4b20124e6bc605d146c6f85e888f703d0c834b9491ea18b103767: Status 404 returned error can't find the container with id 4c2073a8f1c4b20124e6bc605d146c6f85e888f703d0c834b9491ea18b103767 Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.991106 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:31 crc kubenswrapper[4760]: E0226 09:44:31.991502 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.491486525 +0000 UTC m=+117.625432018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:31 crc kubenswrapper[4760]: I0226 09:44:31.993917 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq"] Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.092533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.110509 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.610488144 +0000 UTC m=+117.744433637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.202966 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q4p5w"] Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.203622 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.203833 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.703805051 +0000 UTC m=+117.837750544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.204561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.205470 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.705450948 +0000 UTC m=+117.839396441 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.218000 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg"] Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.252902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" event={"ID":"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8","Type":"ContainerStarted","Data":"ca7fd5b7fea230bd13cc7a9bb770d5ca99b5e9c8b658f62d5a8fd9c70ddc8910"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.254107 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dv5m7" podStartSLOduration=59.254089663 podStartE2EDuration="59.254089663s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.253048293 +0000 UTC m=+117.386993786" watchObservedRunningTime="2026-02-26 09:44:32.254089663 +0000 UTC m=+117.388035146" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.265661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hczgn" event={"ID":"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405","Type":"ContainerStarted","Data":"39e67c4089c1468707da534ff74e04776529ad060c6cdfe7bb325af3ac8899d6"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.271153 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" event={"ID":"f1c99d97-783b-44bf-b113-d5e3ffbffd6d","Type":"ContainerStarted","Data":"dcd4279e5f8931a4d2a523b1f180723fe848efb0185918b948e69d9cf5ad7eb7"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.274400 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cb5r8" event={"ID":"f4d6fe9e-5990-4e8b-8b6f-efbac8600193","Type":"ContainerStarted","Data":"cb8199e401060c7645b05e937087472e217179acb3c35b565ec929b8206c57b7"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.274436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cb5r8" event={"ID":"f4d6fe9e-5990-4e8b-8b6f-efbac8600193","Type":"ContainerStarted","Data":"bf28b4021b18602ce5b38f92e48f361492ab661cdfa3c2e2d1421276de3f057a"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.278180 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" event={"ID":"405dce73-f4d5-4e66-8516-bece5511cc63","Type":"ContainerStarted","Data":"17b1865822aa67a63726e284e1dd6186efea0ea86b5b369006145bc4f95eb22b"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.284040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" event={"ID":"913c298a-1dbe-440a-afb0-3ba32cf96a8c","Type":"ContainerStarted","Data":"5271588190f7b7a2e4618e3a6f7669a9aeca92fcda9e676ed535d76732d3034e"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.286160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" event={"ID":"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c","Type":"ContainerStarted","Data":"5cd22d1b6fff19caa72b0ce1c6762fbd75c81434dbefca17c1dbc02ea0f5c08c"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.297078 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9lh8" podStartSLOduration=59.297053607 podStartE2EDuration="59.297053607s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.296789809 +0000 UTC m=+117.430735302" watchObservedRunningTime="2026-02-26 09:44:32.297053607 +0000 UTC m=+117.430999100" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.297533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" event={"ID":"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb","Type":"ContainerStarted","Data":"e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.297949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.299760 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dlxqc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.299822 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" podUID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.302137 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" 
event={"ID":"708d73ab-ebcd-4477-becc-dae46b14c8af","Type":"ContainerStarted","Data":"3d703f071814f1ea7a8c022eb27547b6ea35e5c3195a2cbafd498ea354483dbd"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.306331 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.308488 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.808441141 +0000 UTC m=+117.942386784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.324726 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kmqvg"] Feb 26 09:44:32 crc kubenswrapper[4760]: W0226 09:44:32.329008 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71578c2a_f1cc_458d_9f95_058597d6a4b3.slice/crio-7b8814b31c2fd3098805515b6f82cc05ab610a4b59841f17f0c68147c7ada330 WatchSource:0}: Error finding container 7b8814b31c2fd3098805515b6f82cc05ab610a4b59841f17f0c68147c7ada330: Status 404 returned 
error can't find the container with id 7b8814b31c2fd3098805515b6f82cc05ab610a4b59841f17f0c68147c7ada330 Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.339531 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" event={"ID":"4dfd68f6-1819-4231-9f69-1fa39c594b27","Type":"ContainerStarted","Data":"59c87fabe91b67bcdade854d1f535cdea3932e8d6b6393d447ed539c6e8bc4b9"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.341933 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-lhclv" podStartSLOduration=60.341911954 podStartE2EDuration="1m0.341911954s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.339980409 +0000 UTC m=+117.473925902" watchObservedRunningTime="2026-02-26 09:44:32.341911954 +0000 UTC m=+117.475857447" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.352245 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g"] Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.352359 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.352868 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 26 
09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.358097 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk"] Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.358132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94"] Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.360457 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l4drh"] Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.361833 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" event={"ID":"c388b29a-9aad-47a6-ba5d-8eabdb4480a6","Type":"ContainerStarted","Data":"4c2073a8f1c4b20124e6bc605d146c6f85e888f703d0c834b9491ea18b103767"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.364737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" event={"ID":"3b4ba74c-b04c-4def-be1a-4e1304730727","Type":"ContainerStarted","Data":"1bea51bb390c3f51d15a09332ee770c076fd932a54431b041526309f233e243b"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.366291 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" event={"ID":"fa1b34f5-88f0-49e2-be26-82e6e6ecf4e6","Type":"ContainerStarted","Data":"7c16757d2b129698a45a9e16285f233f05c2071b5981e82d44b33a2003908ab4"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.369648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" event={"ID":"34f92d85-5b67-49b0-ac8c-2a16c55c7894","Type":"ContainerStarted","Data":"00ee1a20401881b214b00e1268b442e849d3b85fb823eadc6bd7a4a42742e031"} Feb 26 09:44:32 crc kubenswrapper[4760]: 
I0226 09:44:32.369674 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" event={"ID":"34f92d85-5b67-49b0-ac8c-2a16c55c7894","Type":"ContainerStarted","Data":"297d8ac7a652bd86d90887bc966fb5dac7639faf08e31dea693267605976ab19"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.372834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" event={"ID":"cd519bc0-6b98-495a-bc74-e515b87ec6c1","Type":"ContainerStarted","Data":"a2b12a65872af7b5df387aeaf810fdd9ee7a27b82b1faf036474360fd9c4538b"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.380718 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" event={"ID":"4dc1b5d5-817c-44bd-a819-0d09cae65ce9","Type":"ContainerStarted","Data":"f4febfce1fcfa3981426222ee9eb589740f561ace3ae916d39a5afae5df45e1b"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.383082 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" event={"ID":"9cb8ff53-c9e8-4626-a77e-160660696fbc","Type":"ContainerStarted","Data":"c4a4b115974caed975e14428bcf890c2bc04aa3c6e0d9a6523e485208f3341ac"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.385157 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" event={"ID":"28cf1469-8d38-4d73-ab81-8e7a3eb86314","Type":"ContainerStarted","Data":"a90cc6a16dce93ccfc9d5122ca23b32ce16bc9237a71769a654f0d67b9bea5cc"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.390618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" event={"ID":"3bda6877-458b-4632-8677-481e0926441b","Type":"ContainerStarted","Data":"d740960821b808eacd61346ca394730b03641070a389f49dfd2b27bbc2a4ca5b"} Feb 26 09:44:32 crc 
kubenswrapper[4760]: I0226 09:44:32.392987 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" event={"ID":"cceb99fc-acfa-475b-b79c-6209f5040232","Type":"ContainerStarted","Data":"86fd74aa5b3a775662e24d46a4a40b26af1750f1131f282a684ac09145b2aaaf"} Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.398769 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2tqr5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.398881 4760 patch_prober.go:28] interesting pod/console-operator-58897d9998-g6gh7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.398966 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" podUID="8676521e-a09e-457c-bd7d-5acd1cc86b3a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.399042 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.399071 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zhxnq 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.399217 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" podUID="dcef4e8d-f319-4f69-8795-3102aebecd9c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.399139 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.399366 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.401068 4760 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-b2fw9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.401160 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" podUID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.409975 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.410088 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.410486 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:32.910469056 +0000 UTC m=+118.044414549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.416263 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" podStartSLOduration=59.41623411 podStartE2EDuration="59.41623411s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.412139664 +0000 UTC m=+117.546085157" watchObservedRunningTime="2026-02-26 09:44:32.41623411 +0000 UTC m=+117.550179603" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.435042 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53312298-624c-4f35-bdba-cbbf326775d2-metrics-certs\") pod \"network-metrics-daemon-6s89j\" (UID: \"53312298-624c-4f35-bdba-cbbf326775d2\") " pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:32 crc kubenswrapper[4760]: W0226 09:44:32.439009 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54d8e12b_f9b5_4c44_857a_582a2d507728.slice/crio-16f74c8ed714d56efbf84ca708c7cb01ae389b3e4b220dfa6ac731a88e91d40b WatchSource:0}: Error finding container 16f74c8ed714d56efbf84ca708c7cb01ae389b3e4b220dfa6ac731a88e91d40b: Status 404 returned error can't find the container with id 16f74c8ed714d56efbf84ca708c7cb01ae389b3e4b220dfa6ac731a88e91d40b Feb 26 09:44:32 
crc kubenswrapper[4760]: I0226 09:44:32.471669 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" podStartSLOduration=60.471639088 podStartE2EDuration="1m0.471639088s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.468851099 +0000 UTC m=+117.602796592" watchObservedRunningTime="2026-02-26 09:44:32.471639088 +0000 UTC m=+117.605584581" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.492509 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-6s89j" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.506902 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6v588" podStartSLOduration=59.506868671 podStartE2EDuration="59.506868671s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.504244227 +0000 UTC m=+117.638189730" watchObservedRunningTime="2026-02-26 09:44:32.506868671 +0000 UTC m=+117.640814164" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.514490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.532683 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.032614664 +0000 UTC m=+118.166560157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.575597 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" podStartSLOduration=59.575553747 podStartE2EDuration="59.575553747s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.575526686 +0000 UTC m=+117.709472199" watchObservedRunningTime="2026-02-26 09:44:32.575553747 +0000 UTC m=+117.709499240" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.617497 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.617891 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.117876912 +0000 UTC m=+118.251822405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.690952 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gq8x8" podStartSLOduration=59.690916631 podStartE2EDuration="59.690916631s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.689697167 +0000 UTC m=+117.823642660" watchObservedRunningTime="2026-02-26 09:44:32.690916631 +0000 UTC m=+117.824862124" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.718428 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.719177 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.219154925 +0000 UTC m=+118.353100428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.772891 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lf2j2" podStartSLOduration=60.772872705 podStartE2EDuration="1m0.772872705s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.77199722 +0000 UTC m=+117.905942713" watchObservedRunningTime="2026-02-26 09:44:32.772872705 +0000 UTC m=+117.906818198" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.822308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.823363 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.323348832 +0000 UTC m=+118.457294325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.827642 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" podStartSLOduration=59.827613144 podStartE2EDuration="59.827613144s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.821622373 +0000 UTC m=+117.955567866" watchObservedRunningTime="2026-02-26 09:44:32.827613144 +0000 UTC m=+117.961558637" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.875469 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmktk" podStartSLOduration=59.875443696 podStartE2EDuration="59.875443696s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.87381681 +0000 UTC m=+118.007762323" watchObservedRunningTime="2026-02-26 09:44:32.875443696 +0000 UTC m=+118.009389189" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.929538 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z9dvk" podStartSLOduration=60.929519736 podStartE2EDuration="1m0.929519736s" 
podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.928196548 +0000 UTC m=+118.062142041" watchObservedRunningTime="2026-02-26 09:44:32.929519736 +0000 UTC m=+118.063465229" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.930084 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:32 crc kubenswrapper[4760]: E0226 09:44:32.930651 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.430631417 +0000 UTC m=+118.564576920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.931695 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w4g8h" podStartSLOduration=5.931658237 podStartE2EDuration="5.931658237s" podCreationTimestamp="2026-02-26 09:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.908827176 +0000 UTC m=+118.042772669" watchObservedRunningTime="2026-02-26 09:44:32.931658237 +0000 UTC m=+118.065603730" Feb 26 09:44:32 crc kubenswrapper[4760]: I0226 09:44:32.990447 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d9zf8" podStartSLOduration=59.99042424 podStartE2EDuration="59.99042424s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:32.984022148 +0000 UTC m=+118.117967641" watchObservedRunningTime="2026-02-26 09:44:32.99042424 +0000 UTC m=+118.124369723" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.022263 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" podStartSLOduration=60.022243646 podStartE2EDuration="1m0.022243646s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.018314204 +0000 UTC m=+118.152259697" watchObservedRunningTime="2026-02-26 09:44:33.022243646 +0000 UTC m=+118.156189139" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.032287 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.032693 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.532679523 +0000 UTC m=+118.666625016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.043552 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-6s89j"] Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.133072 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.133467 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.633450943 +0000 UTC m=+118.767396436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.238613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.238981 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.738968248 +0000 UTC m=+118.872913741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.345993 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.346196 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.84616248 +0000 UTC m=+118.980107973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.346721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.346769 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.346798 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.346849 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.346899 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.348171 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.848155157 +0000 UTC m=+118.982100650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.350885 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.358029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.363095 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:33 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:33 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:33 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.363182 4760 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.372797 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.380109 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.448392 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.448731 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.948684839 +0000 UTC m=+119.082630332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.448939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.449420 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:33.9494095 +0000 UTC m=+119.083354983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.455708 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" event={"ID":"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8","Type":"ContainerStarted","Data":"9895343cb2ef3dacd1ac8d87b22c910a467f6ee05ff7df01b34acd3de069f059"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.460296 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" event={"ID":"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c","Type":"ContainerStarted","Data":"bf41e0b8a2ec71c7001447552c934e9ea0a155f189ff1c6e5adbebc6bf1c8ab2"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.466485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" event={"ID":"405dce73-f4d5-4e66-8516-bece5511cc63","Type":"ContainerStarted","Data":"c742c00092028e0dcee95824e8839a621a5b6c26f14b8baeac35b11e651690f2"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.487124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hczgn" event={"ID":"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405","Type":"ContainerStarted","Data":"a3cc4a04d6da6888e58ece9824ac6d4d57c7536b674d516d24cb0e9209683322"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.495847 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.502777 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.502978 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" event={"ID":"4dc1b5d5-817c-44bd-a819-0d09cae65ce9","Type":"ContainerStarted","Data":"b28eb28663a9730b68c21cc5051c8db59230348519200c418430ea6cab103555"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.509442 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.514742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" event={"ID":"71578c2a-f1cc-458d-9f95-058597d6a4b3","Type":"ContainerStarted","Data":"d878972c5c4a45b6d5f5aac0da3a975d9342aee08497ce3f6c539407e45c7895"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.514831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" event={"ID":"71578c2a-f1cc-458d-9f95-058597d6a4b3","Type":"ContainerStarted","Data":"7b8814b31c2fd3098805515b6f82cc05ab610a4b59841f17f0c68147c7ada330"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.515662 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.516694 4760 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zr9kg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.516738 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" podUID="71578c2a-f1cc-458d-9f95-058597d6a4b3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.524389 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c95n7" podStartSLOduration=60.524371515 podStartE2EDuration="1m0.524371515s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.523591243 +0000 UTC m=+118.657536736" watchObservedRunningTime="2026-02-26 09:44:33.524371515 +0000 UTC m=+118.658317008" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.525735 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2sb8r" podStartSLOduration=61.525724723 podStartE2EDuration="1m1.525724723s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.4940155 +0000 UTC m=+118.627960993" watchObservedRunningTime="2026-02-26 09:44:33.525724723 +0000 UTC m=+118.659670216" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.533225 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" 
event={"ID":"3b4ba74c-b04c-4def-be1a-4e1304730727","Type":"ContainerStarted","Data":"c959b439cee2e2fc8885e49ade64f89a313488380426a48fdcee5a1028277d5a"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.536277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6s89j" event={"ID":"53312298-624c-4f35-bdba-cbbf326775d2","Type":"ContainerStarted","Data":"e23dcfc8df34639c7adc90e1ce52a3ae541f94c1a8c9b8858be714e361f8b2ed"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.540183 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kmqvg" event={"ID":"2105337b-ddda-4a9a-bbd8-9442b17eedf5","Type":"ContainerStarted","Data":"8ce4a991c638957b8276820e56c5106946cb52e95fd7f1c6091f703299788039"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.540229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kmqvg" event={"ID":"2105337b-ddda-4a9a-bbd8-9442b17eedf5","Type":"ContainerStarted","Data":"8ff2586077a71e4eeadcfb91287df5e010f7fdad6b8948c5e99523d9cf6a05a7"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.554096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.555493 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.055476221 +0000 UTC m=+119.189421714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.567203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" event={"ID":"cd519bc0-6b98-495a-bc74-e515b87ec6c1","Type":"ContainerStarted","Data":"acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.568366 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.622873 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.658418 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.660971 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" podStartSLOduration=60.660933314 podStartE2EDuration="1m0.660933314s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.617207728 +0000 UTC m=+118.751153231" watchObservedRunningTime="2026-02-26 09:44:33.660933314 +0000 UTC m=+118.794878807" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.662039 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.667843 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.167800229 +0000 UTC m=+119.301745722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.694217 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cb8ff53-c9e8-4626-a77e-160660696fbc" containerID="5ce0f27b5c0ef91cebbdd27478b685fc11c564f96bbed4efdbd47b5b59e09965" exitCode=0 Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.694294 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" event={"ID":"9cb8ff53-c9e8-4626-a77e-160660696fbc","Type":"ContainerDied","Data":"5ce0f27b5c0ef91cebbdd27478b685fc11c564f96bbed4efdbd47b5b59e09965"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.703526 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgghc" 
podStartSLOduration=60.703496306 podStartE2EDuration="1m0.703496306s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.685795942 +0000 UTC m=+118.819741425" watchObservedRunningTime="2026-02-26 09:44:33.703496306 +0000 UTC m=+118.837441799" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.713051 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kmqvg" podStartSLOduration=6.713004656 podStartE2EDuration="6.713004656s" podCreationTimestamp="2026-02-26 09:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.712001628 +0000 UTC m=+118.845947141" watchObservedRunningTime="2026-02-26 09:44:33.713004656 +0000 UTC m=+118.846950169" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.741706 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" event={"ID":"34f92d85-5b67-49b0-ac8c-2a16c55c7894","Type":"ContainerStarted","Data":"d990b7468d529a6bb57360832d453dc5627b1a5c49d79f15afbc2f860278a5a1"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.742921 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podStartSLOduration=6.742896258 podStartE2EDuration="6.742896258s" podCreationTimestamp="2026-02-26 09:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.741443576 +0000 UTC m=+118.875389069" watchObservedRunningTime="2026-02-26 09:44:33.742896258 +0000 UTC m=+118.876841751" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.749932 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" event={"ID":"54d8e12b-f9b5-4c44-857a-582a2d507728","Type":"ContainerStarted","Data":"539f1124d63273761d6f0e972f4db9cbaa0c6b68b1931359b74fde61766b9a2f"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.749970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" event={"ID":"54d8e12b-f9b5-4c44-857a-582a2d507728","Type":"ContainerStarted","Data":"16f74c8ed714d56efbf84ca708c7cb01ae389b3e4b220dfa6ac731a88e91d40b"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.764996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.767036 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.267015744 +0000 UTC m=+119.400961237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.779499 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" event={"ID":"3bda6877-458b-4632-8677-481e0926441b","Type":"ContainerStarted","Data":"c5eb4cd461e3cb52ed3a2b90fdd02d7e3e1d95157822db7fd4dbf328858012cb"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.802770 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" event={"ID":"1d1c8d0d-900e-4dd0-a880-1c6889483328","Type":"ContainerStarted","Data":"74ed5e6480c3ac7dcd56bb15fee6876f487a998093b6596aadc9c8dee8edd39f"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.830012 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" podStartSLOduration=60.829992418 podStartE2EDuration="1m0.829992418s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.790668348 +0000 UTC m=+118.924613851" watchObservedRunningTime="2026-02-26 09:44:33.829992418 +0000 UTC m=+118.963937901" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.869495 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.870093 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.370080049 +0000 UTC m=+119.504025542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.872468 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9s5xj" podStartSLOduration=60.872446227 podStartE2EDuration="1m0.872446227s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.870561533 +0000 UTC m=+119.004507026" watchObservedRunningTime="2026-02-26 09:44:33.872446227 +0000 UTC m=+119.006391720" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.902074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" 
event={"ID":"913c298a-1dbe-440a-afb0-3ba32cf96a8c","Type":"ContainerStarted","Data":"794197a3261cdb60c404968006fa6c2a82eb8faa69e793ef7545056d6d36b512"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.903851 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.906393 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-m8s4c" podStartSLOduration=60.906375243 podStartE2EDuration="1m0.906375243s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.905852028 +0000 UTC m=+119.039797531" watchObservedRunningTime="2026-02-26 09:44:33.906375243 +0000 UTC m=+119.040320736" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.931531 4760 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9fdhq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.931881 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" podUID="913c298a-1dbe-440a-afb0-3ba32cf96a8c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.939161 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" 
event={"ID":"c388b29a-9aad-47a6-ba5d-8eabdb4480a6","Type":"ContainerStarted","Data":"83805f3daf9e289b3d03ac337e69da18ae0164fdd273e1f17f15e6d5a2510ef9"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.954094 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" event={"ID":"28cf1469-8d38-4d73-ab81-8e7a3eb86314","Type":"ContainerStarted","Data":"6a83abc4ec8373a0f6b0322079da14233ab07f1683ceb3a106e75743e417b515"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.971096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:33 crc kubenswrapper[4760]: E0226 09:44:33.972138 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.472118395 +0000 UTC m=+119.606063888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.975304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" event={"ID":"dabee30a-f36d-4123-87b9-71a576d3cc2a","Type":"ContainerStarted","Data":"3ae40e50afdfadd8da6552770a0ce932d533af573495df9ac4f15f2c66c9eb50"} Feb 26 09:44:33 crc kubenswrapper[4760]: I0226 09:44:33.975357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" event={"ID":"dabee30a-f36d-4123-87b9-71a576d3cc2a","Type":"ContainerStarted","Data":"1b3153ed856d00fe2548be203861633c1bc145486ec9c73be7264cf1c2a7faf6"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.005870 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mkg6j" podStartSLOduration=61.005857276 podStartE2EDuration="1m1.005857276s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:33.976218662 +0000 UTC m=+119.110164165" watchObservedRunningTime="2026-02-26 09:44:34.005857276 +0000 UTC m=+119.139802769" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.006182 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" podStartSLOduration=62.006178555 
podStartE2EDuration="1m2.006178555s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.005104034 +0000 UTC m=+119.139049527" watchObservedRunningTime="2026-02-26 09:44:34.006178555 +0000 UTC m=+119.140124048" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.011936 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" event={"ID":"efeb18fd-ff9f-4052-94d8-50d892b124b7","Type":"ContainerStarted","Data":"eaf2e1f07b436ba90242403de9038500235a92d7e1e469ac3dfc0155bafc1850"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.054012 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-44s9q" podStartSLOduration=61.053998047 podStartE2EDuration="1m1.053998047s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.05199354 +0000 UTC m=+119.185939033" watchObservedRunningTime="2026-02-26 09:44:34.053998047 +0000 UTC m=+119.187943540" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.079852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" event={"ID":"53613d0e-5df3-4b18-8ebd-eb64ad64d487","Type":"ContainerStarted","Data":"5b1c8679f69f47329643ae59ee16503b2fbda3e397598b2be79560f45d379587"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.084009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.084760 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.584746152 +0000 UTC m=+119.718691645 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.113776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" event={"ID":"cd389c74-3cf0-4a69-936d-ce93a26d2328","Type":"ContainerStarted","Data":"853cd10e6fbfc88fd59608895d3441c9e91942d713f4d1e0313b2ca90d8d9434"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.113823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" event={"ID":"cd389c74-3cf0-4a69-936d-ce93a26d2328","Type":"ContainerStarted","Data":"5c0caf7d8d866e0a1ce4e108d5bad47021356cbac3c8977680b8be4677505d9f"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.189235 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:34 crc 
kubenswrapper[4760]: I0226 09:44:34.189686 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x5flk" podStartSLOduration=61.18967101 podStartE2EDuration="1m1.18967101s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.187979722 +0000 UTC m=+119.321925215" watchObservedRunningTime="2026-02-26 09:44:34.18967101 +0000 UTC m=+119.323616493" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.190181 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" podStartSLOduration=61.190178014 podStartE2EDuration="1m1.190178014s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.106502612 +0000 UTC m=+119.240448115" watchObservedRunningTime="2026-02-26 09:44:34.190178014 +0000 UTC m=+119.324123497" Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.190606 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.690588636 +0000 UTC m=+119.824534129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.248824 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-79n6q" podStartSLOduration=61.248805724 podStartE2EDuration="1m1.248805724s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.238029437 +0000 UTC m=+119.371974930" watchObservedRunningTime="2026-02-26 09:44:34.248805724 +0000 UTC m=+119.382751217" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.288106 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l4drh" podStartSLOduration=61.288084842 podStartE2EDuration="1m1.288084842s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.286390644 +0000 UTC m=+119.420336137" watchObservedRunningTime="2026-02-26 09:44:34.288084842 +0000 UTC m=+119.422030335" Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.294912 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.794898166 +0000 UTC m=+119.928843659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.295009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.340917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" event={"ID":"26ffe756-78b8-4546-9587-9d031709ba56","Type":"ContainerStarted","Data":"d2cff974b3f885d40e982f96b55e334974cc1697605b0fa624530aed9c1ec7d3"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.340970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" event={"ID":"26ffe756-78b8-4546-9587-9d031709ba56","Type":"ContainerStarted","Data":"92ff70d2abc0a82beb01b83ae0ff36880bb4a92706ca2f28ce1d15ad3848cea6"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.341853 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.356186 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" event={"ID":"cceb99fc-acfa-475b-b79c-6209f5040232","Type":"ContainerStarted","Data":"bf495765a2ae81589e78497aef4ea14b176ab544cb11243fd5a62a6b8b920d48"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.356920 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:34 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:34 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:34 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.356949 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.374735 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" podStartSLOduration=61.374717449 podStartE2EDuration="1m1.374717449s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.372534657 +0000 UTC m=+119.506480150" watchObservedRunningTime="2026-02-26 09:44:34.374717449 +0000 UTC m=+119.508662942" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.388648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" 
event={"ID":"f1c99d97-783b-44bf-b113-d5e3ffbffd6d","Type":"ContainerStarted","Data":"d33be2827a52c815cfcca8a2c06b0e7f74ea52f04195144868e5526a55bf5427"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.389735 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.393787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" event={"ID":"9233a625-86b6-4160-a8b8-7db5a1fe7d23","Type":"ContainerStarted","Data":"5c2f4e6ef84926d5f8802366d2a60c6103f8c1938bf59bd4f82fda3dc3a5d177"} Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.393840 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.397797 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.398921 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:34.898905558 +0000 UTC m=+120.032851051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.409957 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dlxqc container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.410022 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" podUID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.416020 4760 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vbsd5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.416077 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" podUID="f1c99d97-783b-44bf-b113-d5e3ffbffd6d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 
09:44:34.427802 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.468749 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cb5r8" podStartSLOduration=61.468728906 podStartE2EDuration="1m1.468728906s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.457345332 +0000 UTC m=+119.591290845" watchObservedRunningTime="2026-02-26 09:44:34.468728906 +0000 UTC m=+119.602674399" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.470028 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tft7j" podStartSLOduration=61.470017713 podStartE2EDuration="1m1.470017713s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.411006963 +0000 UTC m=+119.544952456" watchObservedRunningTime="2026-02-26 09:44:34.470017713 +0000 UTC m=+119.603963206" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.480762 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dpdz4"] Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.494595 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" podStartSLOduration=61.494562672 podStartE2EDuration="1m1.494562672s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 
09:44:34.493900733 +0000 UTC m=+119.627846216" watchObservedRunningTime="2026-02-26 09:44:34.494562672 +0000 UTC m=+119.628508165" Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.515548 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.519696 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.019682747 +0000 UTC m=+120.153628240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.550500 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" podStartSLOduration=61.550484605 podStartE2EDuration="1m1.550484605s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:34.548842938 +0000 UTC m=+119.682788431" watchObservedRunningTime="2026-02-26 09:44:34.550484605 +0000 UTC 
m=+119.684430098" Feb 26 09:44:34 crc kubenswrapper[4760]: W0226 09:44:34.614856 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-ea7b3a6bb95231bd11c1ed78b16ef7518243f2ddee630e56bb1efba2ddafabbd WatchSource:0}: Error finding container ea7b3a6bb95231bd11c1ed78b16ef7518243f2ddee630e56bb1efba2ddafabbd: Status 404 returned error can't find the container with id ea7b3a6bb95231bd11c1ed78b16ef7518243f2ddee630e56bb1efba2ddafabbd Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.616673 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.617006 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.116989278 +0000 UTC m=+120.250934771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.718242 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.718524 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.218511759 +0000 UTC m=+120.352457252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.820031 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.820705 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.320690219 +0000 UTC m=+120.454635712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:34 crc kubenswrapper[4760]: I0226 09:44:34.922434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:34 crc kubenswrapper[4760]: E0226 09:44:34.922989 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.422967601 +0000 UTC m=+120.556913094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.023720 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.023833 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.523816093 +0000 UTC m=+120.657761586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.023911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.024168 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.524158703 +0000 UTC m=+120.658104196 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.089139 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2fw9"] Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.124759 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.124883 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.624862681 +0000 UTC m=+120.758808174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.136435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.139677 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.639647432 +0000 UTC m=+120.773592925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.240372 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.240720 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.740704929 +0000 UTC m=+120.874650422 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.346593 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.347192 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.847176051 +0000 UTC m=+120.981121544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.368131 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:35 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:35 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:35 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.368191 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.425585 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"] Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.425854 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" podUID="dcef4e8d-f319-4f69-8795-3102aebecd9c" containerName="route-controller-manager" containerID="cri-o://a8a312468a0af8401f1680f14f12bd074ded96d26d970315fbd26cbb923812c4" gracePeriod=30 Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.426954 4760 generic.go:334] "Generic (PLEG): 
container finished" podID="54d8e12b-f9b5-4c44-857a-582a2d507728" containerID="539f1124d63273761d6f0e972f4db9cbaa0c6b68b1931359b74fde61766b9a2f" exitCode=0 Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.427873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" event={"ID":"54d8e12b-f9b5-4c44-857a-582a2d507728","Type":"ContainerDied","Data":"539f1124d63273761d6f0e972f4db9cbaa0c6b68b1931359b74fde61766b9a2f"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.427926 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" event={"ID":"54d8e12b-f9b5-4c44-857a-582a2d507728","Type":"ContainerStarted","Data":"d07fe97565992aabcb30aa9b634057a745c641ce9d0a4690d32053874e187cea"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.437255 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.451023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" event={"ID":"0d8fda26-daaf-42fe-9cb8-6057f9c7abb8","Type":"ContainerStarted","Data":"ba81576abddf8dca73d9d08e59d63be68fc1a4276c07dc47cf810e65b991b5a5"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.451789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.452216 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:35.952195442 +0000 UTC m=+121.086140935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.467704 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" event={"ID":"8c9a9e90-0849-4fb8-be6b-3cbc35e1982c","Type":"ContainerStarted","Data":"b457008fc1c0629de2f86d3e004a0cc47215e85e86e0e2115d392b2e84e9bbda"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.475346 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" event={"ID":"9cb8ff53-c9e8-4626-a77e-160660696fbc","Type":"ContainerStarted","Data":"204aa45d510b0164405277d972ce2051c7106df7633bb01950381f797187420d"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.477607 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" event={"ID":"26ffe756-78b8-4546-9587-9d031709ba56","Type":"ContainerStarted","Data":"6c0c8a88dcd0f425df5084491caf25ed1d496b2c5e4e2a652394c76b7d4bd6f3"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.482910 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"82c7e62847cdc9037d7b5be5f4e2f86867075c17f844f9b32d2df225226132e6"} Feb 26 
09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.482959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ea7b3a6bb95231bd11c1ed78b16ef7518243f2ddee630e56bb1efba2ddafabbd"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.483362 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.485235 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hczgn" event={"ID":"b0e52d65-bfe6-4f19-a0a1-2cdf4bf69405","Type":"ContainerStarted","Data":"97aee09a119a1db23a8a86e14de1dd46768239c3ef5ed3604e3f2ea098eabc57"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.485609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.487224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"68f9dff1e41c30958fb0521c7e7f78fe1fa96d96f5c44914fc816f18b6e062ee"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.487245 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f9765258eaf7cbfe53a0995da82289427acc8074e173beafb8cb31a0f008524c"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.488780 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nm4ph" 
event={"ID":"708d73ab-ebcd-4477-becc-dae46b14c8af","Type":"ContainerStarted","Data":"9c0de977a7e57487c5f4d893e9576e595712964772ee16736d16b8ca8ca58776"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.502336 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"84e00ba07467c0c7e547f4591740164c1fbfaf33a7a204af640e88746d7baa17"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.502420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"81894fab9477ff0d90e6e69827546423b6897675147235a7a6f37d635da25c8b"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.507561 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6s89j" event={"ID":"53312298-624c-4f35-bdba-cbbf326775d2","Type":"ContainerStarted","Data":"d37e183a7a09687997a390bc3267c8efc76768267ec716e5cdc31f4eeccdace1"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.507622 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-6s89j" event={"ID":"53312298-624c-4f35-bdba-cbbf326775d2","Type":"ContainerStarted","Data":"b0c96fbcf7cb6b75b54c429a03992c216fd7e86de1d772cab0813379d29ac575"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.529701 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.535555 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"57c36a2d93b08bc9ea526508ee3c821fdeaff9b07ec98694105d32ec96f2d82f"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.535907 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.554556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.557623 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.057611434 +0000 UTC m=+121.191556917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.558110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" event={"ID":"53613d0e-5df3-4b18-8ebd-eb64ad64d487","Type":"ContainerStarted","Data":"6a31d3af9653d9f354ab20a40763fed0da0283550243e3ef60cbe1fe16932be3"} Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.565427 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.574830 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zr9kg" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.634773 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" podStartSLOduration=62.634754841 podStartE2EDuration="1m2.634754841s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:35.630946912 +0000 UTC m=+120.764892405" watchObservedRunningTime="2026-02-26 09:44:35.634754841 +0000 UTC m=+120.768700334" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.656145 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.657589 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.15755831 +0000 UTC m=+121.291503793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.668820 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vbsd5" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.758790 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.759149 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 09:44:36.259135942 +0000 UTC m=+121.393081435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.807965 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5xjtp" podStartSLOduration=62.807945432 podStartE2EDuration="1m2.807945432s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:35.722675174 +0000 UTC m=+120.856620667" watchObservedRunningTime="2026-02-26 09:44:35.807945432 +0000 UTC m=+120.941890925" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.808942 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mpttf" podStartSLOduration=62.80893283 podStartE2EDuration="1m2.80893283s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:35.800581403 +0000 UTC m=+120.934526896" watchObservedRunningTime="2026-02-26 09:44:35.80893283 +0000 UTC m=+120.942878323" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.860297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.860546 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.360519439 +0000 UTC m=+121.494464932 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.860748 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.861060 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.361052705 +0000 UTC m=+121.494998198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.943217 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=30.943197324 podStartE2EDuration="30.943197324s" podCreationTimestamp="2026-02-26 09:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:35.942073912 +0000 UTC m=+121.076019405" watchObservedRunningTime="2026-02-26 09:44:35.943197324 +0000 UTC m=+121.077142817" Feb 26 09:44:35 crc kubenswrapper[4760]: I0226 09:44:35.962603 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:35 crc kubenswrapper[4760]: E0226 09:44:35.962973 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.462958806 +0000 UTC m=+121.596904299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.059351 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-6s89j" podStartSLOduration=63.059332431 podStartE2EDuration="1m3.059332431s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:36.053270118 +0000 UTC m=+121.187215601" watchObservedRunningTime="2026-02-26 09:44:36.059332431 +0000 UTC m=+121.193277924" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.064498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.064893 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.564870899 +0000 UTC m=+121.698816392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.065446 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56494: no serving certificate available for the kubelet" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.099357 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4qdxn" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.138824 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hczgn" podStartSLOduration=9.138804194 podStartE2EDuration="9.138804194s" podCreationTimestamp="2026-02-26 09:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:36.137051134 +0000 UTC m=+121.270996627" watchObservedRunningTime="2026-02-26 09:44:36.138804194 +0000 UTC m=+121.272749687" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.147422 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9fdhq" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.165218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.165371 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.66534891 +0000 UTC m=+121.799294403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.165419 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56508: no serving certificate available for the kubelet" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.165875 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.166202 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.666193574 +0000 UTC m=+121.800139067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.204829 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.205419 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.211451 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.213278 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.228796 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.228854 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.231820 4760 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-njc94 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 
09:44:36.231895 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" podUID="54d8e12b-f9b5-4c44-857a-582a2d507728" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.242124 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.266760 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.266875 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.766853429 +0000 UTC m=+121.900798922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.267012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2391eab-226f-4788-8581-fdbffe0b2e95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.267040 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2391eab-226f-4788-8581-fdbffe0b2e95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.267107 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.267457 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 09:44:36.767440376 +0000 UTC m=+121.901385869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.284211 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56512: no serving certificate available for the kubelet" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.354902 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:36 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:36 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:36 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.354972 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.368234 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:36 crc 
kubenswrapper[4760]: E0226 09:44:36.368410 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.86838337 +0000 UTC m=+122.002328863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.368457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2391eab-226f-4788-8581-fdbffe0b2e95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.368513 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2391eab-226f-4788-8581-fdbffe0b2e95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.368566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.368791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2391eab-226f-4788-8581-fdbffe0b2e95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.369069 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.86905841 +0000 UTC m=+122.003003973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.383757 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56514: no serving certificate available for the kubelet"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.406509 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2391eab-226f-4788-8581-fdbffe0b2e95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.476911 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.477527 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:36.977508078 +0000 UTC m=+122.111453561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.501724 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56530: no serving certificate available for the kubelet"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.531945 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.541587 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.590300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.590840 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.090827705 +0000 UTC m=+122.224773198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.617056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" event={"ID":"9cb8ff53-c9e8-4626-a77e-160660696fbc","Type":"ContainerStarted","Data":"8e73c434b785da8e4395c152387c22e3f815e788d4a0db06041569308bd1a1d8"}
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.636998 4760 generic.go:334] "Generic (PLEG): container finished" podID="dcef4e8d-f319-4f69-8795-3102aebecd9c" containerID="a8a312468a0af8401f1680f14f12bd074ded96d26d970315fbd26cbb923812c4" exitCode=0
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.637630 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" event={"ID":"dcef4e8d-f319-4f69-8795-3102aebecd9c","Type":"ContainerDied","Data":"a8a312468a0af8401f1680f14f12bd074ded96d26d970315fbd26cbb923812c4"}
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.638640 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" podUID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" containerName="controller-manager" containerID="cri-o://a30925b264dc57723578def0354c1bf32084e4c69b273733b8b34f21b6166159" gracePeriod=30
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.641428 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" gracePeriod=30
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.694272 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.695014 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.194989631 +0000 UTC m=+122.328935124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.723762 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56532: no serving certificate available for the kubelet"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.795374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.798513 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.298498168 +0000 UTC m=+122.432443741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.799777 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" podStartSLOduration=64.799734774 podStartE2EDuration="1m4.799734774s" podCreationTimestamp="2026-02-26 09:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:36.796920273 +0000 UTC m=+121.930865766" watchObservedRunningTime="2026-02-26 09:44:36.799734774 +0000 UTC m=+121.933680267"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.811542 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56540: no serving certificate available for the kubelet"
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.900439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.900852 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.400833222 +0000 UTC m=+122.534778715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:36 crc kubenswrapper[4760]: I0226 09:44:36.959628 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56548: no serving certificate available for the kubelet"
Feb 26 09:44:36 crc kubenswrapper[4760]: E0226 09:44:36.986865 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc388b29a_9aad_47a6_ba5d_8eabdb4480a6.slice/crio-conmon-83805f3daf9e289b3d03ac337e69da18ae0164fdd273e1f17f15e6d5a2510ef9.scope\": RecentStats: unable to find data in memory cache]"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.005434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.005931 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.505895994 +0000 UTC m=+122.639841487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.050746 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.106484 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.106887 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.60686704 +0000 UTC m=+122.740812543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.136816 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.208403 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-config\") pod \"dcef4e8d-f319-4f69-8795-3102aebecd9c\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") "
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.208750 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-client-ca\") pod \"dcef4e8d-f319-4f69-8795-3102aebecd9c\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") "
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.208909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef4e8d-f319-4f69-8795-3102aebecd9c-serving-cert\") pod \"dcef4e8d-f319-4f69-8795-3102aebecd9c\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") "
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.208962 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzbvt\" (UniqueName: \"kubernetes.io/projected/dcef4e8d-f319-4f69-8795-3102aebecd9c-kube-api-access-gzbvt\") pod \"dcef4e8d-f319-4f69-8795-3102aebecd9c\" (UID: \"dcef4e8d-f319-4f69-8795-3102aebecd9c\") "
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.214984 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.221861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-config" (OuterVolumeSpecName: "config") pod "dcef4e8d-f319-4f69-8795-3102aebecd9c" (UID: "dcef4e8d-f319-4f69-8795-3102aebecd9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.222833 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-client-ca" (OuterVolumeSpecName: "client-ca") pod "dcef4e8d-f319-4f69-8795-3102aebecd9c" (UID: "dcef4e8d-f319-4f69-8795-3102aebecd9c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.226625 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcef4e8d-f319-4f69-8795-3102aebecd9c-kube-api-access-gzbvt" (OuterVolumeSpecName: "kube-api-access-gzbvt") pod "dcef4e8d-f319-4f69-8795-3102aebecd9c" (UID: "dcef4e8d-f319-4f69-8795-3102aebecd9c"). InnerVolumeSpecName "kube-api-access-gzbvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.227229 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.727215677 +0000 UTC m=+122.861161170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.237048 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcef4e8d-f319-4f69-8795-3102aebecd9c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcef4e8d-f319-4f69-8795-3102aebecd9c" (UID: "dcef4e8d-f319-4f69-8795-3102aebecd9c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.298862 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"]
Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.301673 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcef4e8d-f319-4f69-8795-3102aebecd9c" containerName="route-controller-manager"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.301704 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcef4e8d-f319-4f69-8795-3102aebecd9c" containerName="route-controller-manager"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.301853 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcef4e8d-f319-4f69-8795-3102aebecd9c" containerName="route-controller-manager"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.303446 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.303667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"]
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.315828 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.316433 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-config\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.316513 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dcef4e8d-f319-4f69-8795-3102aebecd9c-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.316593 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcef4e8d-f319-4f69-8795-3102aebecd9c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.316666 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzbvt\" (UniqueName: \"kubernetes.io/projected/dcef4e8d-f319-4f69-8795-3102aebecd9c-kube-api-access-gzbvt\") on node \"crc\" DevicePath \"\""
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.316532 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g8gj5"]
Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.317546 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.817527178 +0000 UTC m=+122.951472671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.318324 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.320033 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.334925 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8gj5"]
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.354187 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 26 09:44:37 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Feb 26 09:44:37 crc kubenswrapper[4760]: [+]process-running ok
Feb 26 09:44:37 crc kubenswrapper[4760]: healthz check failed
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.354248 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418351 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-catalog-content\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prsb4\" (UniqueName: \"kubernetes.io/projected/bedbd455-baad-4b56-86b7-1d851407744b-kube-api-access-prsb4\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418591 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62986796-95a2-4ea9-b0ea-e6156ecae439-serving-cert\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418730 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418779 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-config\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418831 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-utilities\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418869 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-client-ca\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.418981 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64g8t\" (UniqueName: \"kubernetes.io/projected/62986796-95a2-4ea9-b0ea-e6156ecae439-kube-api-access-64g8t\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.419365 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:37.919354048 +0000 UTC m=+123.053299541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.520460 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.520902 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-config\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.520933 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-utilities\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.520948 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-client-ca\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.520972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64g8t\" (UniqueName: \"kubernetes.io/projected/62986796-95a2-4ea9-b0ea-e6156ecae439-kube-api-access-64g8t\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.520998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-catalog-content\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.521027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prsb4\" (UniqueName: \"kubernetes.io/projected/bedbd455-baad-4b56-86b7-1d851407744b-kube-api-access-prsb4\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.521045 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62986796-95a2-4ea9-b0ea-e6156ecae439-serving-cert\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.521474 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.021417764 +0000 UTC m=+123.155363257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.521875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-utilities\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.521981 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-catalog-content\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.522681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-client-ca\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.524481 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hvl2n"]
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.525406 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvl2n"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.526411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-config\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.532966 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.535192 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62986796-95a2-4ea9-b0ea-e6156ecae439-serving-cert\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.541910 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvl2n"]
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.550751 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64g8t\" (UniqueName: \"kubernetes.io/projected/62986796-95a2-4ea9-b0ea-e6156ecae439-kube-api-access-64g8t\") pod \"route-controller-manager-5dbdf696cf-whkj6\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.555900 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prsb4\" (UniqueName: \"kubernetes.io/projected/bedbd455-baad-4b56-86b7-1d851407744b-kube-api-access-prsb4\") pod \"certified-operators-g8gj5\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.621873 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56556: no serving certificate available for the kubelet"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.622173 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.622280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-catalog-content\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.622326 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-utilities\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n"
Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.622358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbdv\"
(UniqueName: \"kubernetes.io/projected/7427c503-5c81-488e-b0f0-61b2537a96a4-kube-api-access-tbbdv\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.622763 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.12274604 +0000 UTC m=+123.256691593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.640135 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.645891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e2391eab-226f-4788-8581-fdbffe0b2e95","Type":"ContainerStarted","Data":"a81f3b18734b2cd2959196182cf512bb8127ef0ce0b16976be81eb288f2d5068"} Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.647278 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g8gj5" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.654913 4760 generic.go:334] "Generic (PLEG): container finished" podID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" containerID="a30925b264dc57723578def0354c1bf32084e4c69b273733b8b34f21b6166159" exitCode=0 Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.654997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" event={"ID":"aef80081-75af-41e5-a0bf-f6a7d0d384bf","Type":"ContainerDied","Data":"a30925b264dc57723578def0354c1bf32084e4c69b273733b8b34f21b6166159"} Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.655030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" event={"ID":"aef80081-75af-41e5-a0bf-f6a7d0d384bf","Type":"ContainerDied","Data":"bf0354bce3c60362e588ecf157fce30d8af9c612fa760541169d6bd17dc97d4f"} Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.655046 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0354bce3c60362e588ecf157fce30d8af9c612fa760541169d6bd17dc97d4f" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.657813 4760 generic.go:334] "Generic (PLEG): container finished" podID="c388b29a-9aad-47a6-ba5d-8eabdb4480a6" containerID="83805f3daf9e289b3d03ac337e69da18ae0164fdd273e1f17f15e6d5a2510ef9" exitCode=0 Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.657848 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" event={"ID":"c388b29a-9aad-47a6-ba5d-8eabdb4480a6","Type":"ContainerDied","Data":"83805f3daf9e289b3d03ac337e69da18ae0164fdd273e1f17f15e6d5a2510ef9"} Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.660846 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.661355 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.661392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq" event={"ID":"dcef4e8d-f319-4f69-8795-3102aebecd9c","Type":"ContainerDied","Data":"031d500d47638139cb2e733314c3f2cea09e2a5e8c293c8f3d85b26c783e2b67"} Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.661440 4760 scope.go:117] "RemoveContainer" containerID="a8a312468a0af8401f1680f14f12bd074ded96d26d970315fbd26cbb923812c4" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.717220 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"] Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.718385 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-895t9"] Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.718623 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" containerName="controller-manager" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.718643 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" containerName="controller-manager" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.718780 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" containerName="controller-manager" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.719599 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.722416 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zhxnq"] Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.722940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.723134 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.223108538 +0000 UTC m=+123.357054021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.723297 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-catalog-content\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.723344 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-utilities\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.723377 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbdv\" (UniqueName: \"kubernetes.io/projected/7427c503-5c81-488e-b0f0-61b2537a96a4-kube-api-access-tbbdv\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.723428 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: 
\"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.723773 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.223758196 +0000 UTC m=+123.357703689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.724120 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-utilities\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.724310 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-catalog-content\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.740370 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-895t9"] Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.762120 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tbbdv\" (UniqueName: \"kubernetes.io/projected/7427c503-5c81-488e-b0f0-61b2537a96a4-kube-api-access-tbbdv\") pod \"community-operators-hvl2n\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.825329 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef80081-75af-41e5-a0bf-f6a7d0d384bf-serving-cert\") pod \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.825410 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-proxy-ca-bundles\") pod \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.825619 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.825663 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvffw\" (UniqueName: \"kubernetes.io/projected/aef80081-75af-41e5-a0bf-f6a7d0d384bf-kube-api-access-vvffw\") pod \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.825697 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-client-ca\") pod 
\"aef80081-75af-41e5-a0bf-f6a7d0d384bf\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.825729 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-config\") pod \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\" (UID: \"aef80081-75af-41e5-a0bf-f6a7d0d384bf\") " Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.825886 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-utilities\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.826015 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-catalog-content\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.826103 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ngq\" (UniqueName: \"kubernetes.io/projected/919bb2ab-9fbf-4a58-835e-8348eebaf093-kube-api-access-g5ngq\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.828199 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aef80081-75af-41e5-a0bf-f6a7d0d384bf" (UID: 
"aef80081-75af-41e5-a0bf-f6a7d0d384bf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.828261 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.328246402 +0000 UTC m=+123.462191885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.830383 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-config" (OuterVolumeSpecName: "config") pod "aef80081-75af-41e5-a0bf-f6a7d0d384bf" (UID: "aef80081-75af-41e5-a0bf-f6a7d0d384bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.831203 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef80081-75af-41e5-a0bf-f6a7d0d384bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aef80081-75af-41e5-a0bf-f6a7d0d384bf" (UID: "aef80081-75af-41e5-a0bf-f6a7d0d384bf"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.831607 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "aef80081-75af-41e5-a0bf-f6a7d0d384bf" (UID: "aef80081-75af-41e5-a0bf-f6a7d0d384bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.833170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef80081-75af-41e5-a0bf-f6a7d0d384bf-kube-api-access-vvffw" (OuterVolumeSpecName: "kube-api-access-vvffw") pod "aef80081-75af-41e5-a0bf-f6a7d0d384bf" (UID: "aef80081-75af-41e5-a0bf-f6a7d0d384bf"). InnerVolumeSpecName "kube-api-access-vvffw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.877832 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"] Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.889108 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.916625 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j58zh"] Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.922835 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.930993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-utilities\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-catalog-content\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931108 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931136 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5ngq\" (UniqueName: \"kubernetes.io/projected/919bb2ab-9fbf-4a58-835e-8348eebaf093-kube-api-access-g5ngq\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931182 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef80081-75af-41e5-a0bf-f6a7d0d384bf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:37 
crc kubenswrapper[4760]: I0226 09:44:37.931196 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931207 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvffw\" (UniqueName: \"kubernetes.io/projected/aef80081-75af-41e5-a0bf-f6a7d0d384bf-kube-api-access-vvffw\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931216 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931229 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef80081-75af-41e5-a0bf-f6a7d0d384bf-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.931485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-utilities\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: E0226 09:44:37.931822 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.4318036 +0000 UTC m=+123.565749093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.934075 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j58zh"] Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.934186 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-catalog-content\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:37 crc kubenswrapper[4760]: I0226 09:44:37.946915 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5ngq\" (UniqueName: \"kubernetes.io/projected/919bb2ab-9fbf-4a58-835e-8348eebaf093-kube-api-access-g5ngq\") pod \"certified-operators-895t9\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.032381 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.032856 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-utilities\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.032885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlfdt\" (UniqueName: \"kubernetes.io/projected/d5f41609-3893-4649-be8b-2a3c839f082a-kube-api-access-zlfdt\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.032983 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.532965471 +0000 UTC m=+123.666910964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.033032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-catalog-content\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.034071 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.101691 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g8gj5"] Feb 26 09:44:38 crc kubenswrapper[4760]: W0226 09:44:38.125043 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbedbd455_baad_4b56_86b7_1d851407744b.slice/crio-3e7e6b4855be5f06d26fcdf37fdad3eed92f2c82ee81e8546c3eef249789fda6 WatchSource:0}: Error finding container 3e7e6b4855be5f06d26fcdf37fdad3eed92f2c82ee81e8546c3eef249789fda6: Status 404 returned error can't find the container with id 3e7e6b4855be5f06d26fcdf37fdad3eed92f2c82ee81e8546c3eef249789fda6 Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.128054 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hvl2n"] Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 
09:44:38.134634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-catalog-content\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.134690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-utilities\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.134719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlfdt\" (UniqueName: \"kubernetes.io/projected/d5f41609-3893-4649-be8b-2a3c839f082a-kube-api-access-zlfdt\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.134787 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.135082 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.635067939 +0000 UTC m=+123.769013432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.135327 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-utilities\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.135537 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-catalog-content\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.155639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlfdt\" (UniqueName: \"kubernetes.io/projected/d5f41609-3893-4649-be8b-2a3c839f082a-kube-api-access-zlfdt\") pod \"community-operators-j58zh\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.236253 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.236701 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.736683272 +0000 UTC m=+123.870628775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.248172 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.338242 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.338624 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.838558613 +0000 UTC m=+123.972504106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.354167 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:38 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:38 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:38 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.354221 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.435508 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-895t9"] Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.439167 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.439370 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.939342103 +0000 UTC m=+124.073287596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.439556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.439968 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:38.939957061 +0000 UTC m=+124.073902544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.458783 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j58zh"] Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.541172 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.541337 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.041316037 +0000 UTC m=+124.175261540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.541455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.541792 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.04177944 +0000 UTC m=+124.175724933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.586397 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcef4e8d-f319-4f69-8795-3102aebecd9c" path="/var/lib/kubelet/pods/dcef4e8d-f319-4f69-8795-3102aebecd9c/volumes" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.642622 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.642788 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.142759216 +0000 UTC m=+124.276704709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.642878 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.643251 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.143237439 +0000 UTC m=+124.277182942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.668885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-895t9" event={"ID":"919bb2ab-9fbf-4a58-835e-8348eebaf093","Type":"ContainerStarted","Data":"18be98eaafabddba08432b6b77b097ee17d401243004a9df8c0d005060113b2c"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.670810 4760 generic.go:334] "Generic (PLEG): container finished" podID="bedbd455-baad-4b56-86b7-1d851407744b" containerID="69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd" exitCode=0 Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.671019 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8gj5" event={"ID":"bedbd455-baad-4b56-86b7-1d851407744b","Type":"ContainerDied","Data":"69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.671074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8gj5" event={"ID":"bedbd455-baad-4b56-86b7-1d851407744b","Type":"ContainerStarted","Data":"3e7e6b4855be5f06d26fcdf37fdad3eed92f2c82ee81e8546c3eef249789fda6"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.672942 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.673822 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="e2391eab-226f-4788-8581-fdbffe0b2e95" containerID="a09f808659c2134ed0a497407dbbcc0c4bc96f711bb2102a7d29831b52ac0eed" exitCode=0 Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.673889 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e2391eab-226f-4788-8581-fdbffe0b2e95","Type":"ContainerDied","Data":"a09f808659c2134ed0a497407dbbcc0c4bc96f711bb2102a7d29831b52ac0eed"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.678365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j58zh" event={"ID":"d5f41609-3893-4649-be8b-2a3c839f082a","Type":"ContainerStarted","Data":"6908796f6ae8e41cf4f193efa49c7aeb824d1c5d4e37f4b9dddf6374ffbb8aa6"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.679738 4760 generic.go:334] "Generic (PLEG): container finished" podID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerID="4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6" exitCode=0 Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.679794 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvl2n" event={"ID":"7427c503-5c81-488e-b0f0-61b2537a96a4","Type":"ContainerDied","Data":"4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.679814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvl2n" event={"ID":"7427c503-5c81-488e-b0f0-61b2537a96a4","Type":"ContainerStarted","Data":"f0e5acfb741b2a7ef0520c3c9c95efb62515d71182903a38a6406846ccf3b781"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.686106 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" 
event={"ID":"62986796-95a2-4ea9-b0ea-e6156ecae439","Type":"ContainerStarted","Data":"bf7b02bc3c30c6c6789f7ea3ce28c1c4328cfb14ee15e7deb2786842904572bc"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.686142 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.686152 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" event={"ID":"62986796-95a2-4ea9-b0ea-e6156ecae439","Type":"ContainerStarted","Data":"b5e3d1cc0786874ab0c4677c2026531ec918a8ad584b9e78ad5b9c02f06355fb"} Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.686195 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-b2fw9" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.744497 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.744825 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.244794801 +0000 UTC m=+124.378740304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.745079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.745380 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.245371048 +0000 UTC m=+124.379316541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.791688 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" podStartSLOduration=2.7916594359999998 podStartE2EDuration="2.791659436s" podCreationTimestamp="2026-02-26 09:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:38.774398664 +0000 UTC m=+123.908344177" watchObservedRunningTime="2026-02-26 09:44:38.791659436 +0000 UTC m=+123.925604929" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.793679 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2fw9"] Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.796719 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-b2fw9"] Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.846242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.846384 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.346363654 +0000 UTC m=+124.480309147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.846620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.846919 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.346907349 +0000 UTC m=+124.480852842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.936867 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56562: no serving certificate available for the kubelet" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.947089 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.947260 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.447242836 +0000 UTC m=+124.581188329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.947349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:38 crc kubenswrapper[4760]: E0226 09:44:38.947627 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.447619297 +0000 UTC m=+124.581564790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.961617 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:38 crc kubenswrapper[4760]: I0226 09:44:38.966632 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.048740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-secret-volume\") pod \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.048925 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.048952 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ss4h\" (UniqueName: \"kubernetes.io/projected/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-kube-api-access-2ss4h\") pod \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.049000 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-config-volume\") pod \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\" (UID: \"c388b29a-9aad-47a6-ba5d-8eabdb4480a6\") " Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.049083 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.549064076 +0000 UTC m=+124.683009569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.049160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.049431 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.549423306 +0000 UTC m=+124.683368789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.049637 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "c388b29a-9aad-47a6-ba5d-8eabdb4480a6" (UID: "c388b29a-9aad-47a6-ba5d-8eabdb4480a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.056273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-kube-api-access-2ss4h" (OuterVolumeSpecName: "kube-api-access-2ss4h") pod "c388b29a-9aad-47a6-ba5d-8eabdb4480a6" (UID: "c388b29a-9aad-47a6-ba5d-8eabdb4480a6"). InnerVolumeSpecName "kube-api-access-2ss4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.056832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c388b29a-9aad-47a6-ba5d-8eabdb4480a6" (UID: "c388b29a-9aad-47a6-ba5d-8eabdb4480a6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.150322 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.150676 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.650637728 +0000 UTC m=+124.784583221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.150858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.150903 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ss4h\" (UniqueName: \"kubernetes.io/projected/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-kube-api-access-2ss4h\") on 
node \"crc\" DevicePath \"\"" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.150915 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.150923 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c388b29a-9aad-47a6-ba5d-8eabdb4480a6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.151227 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.651211975 +0000 UTC m=+124.785157468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.252274 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.252417 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.752396886 +0000 UTC m=+124.886342379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.252968 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.253712 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.753699583 +0000 UTC m=+124.887645086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.296090 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn"] Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.296367 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c388b29a-9aad-47a6-ba5d-8eabdb4480a6" containerName="collect-profiles" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.296394 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c388b29a-9aad-47a6-ba5d-8eabdb4480a6" containerName="collect-profiles" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.296557 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c388b29a-9aad-47a6-ba5d-8eabdb4480a6" containerName="collect-profiles" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.297103 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.299773 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.300027 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.300204 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.303610 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.304070 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.304557 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.315375 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn"] Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.316079 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.327967 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5wz6v"] Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.330040 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.332006 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.342546 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wz6v"] Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.343978 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-g6gh7" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.353775 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.354060 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.85404707 +0000 UTC m=+124.987992563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.363399 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:39 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:39 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:39 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.363444 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455739 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqcdq\" (UniqueName: 
\"kubernetes.io/projected/ac18d765-3a28-4da9-8823-fadbdad35b1d-kube-api-access-sqcdq\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455812 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-utilities\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-proxy-ca-bundles\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpqmx\" (UniqueName: \"kubernetes.io/projected/5b918bed-a785-4a4d-a784-0860bdbadadf-kube-api-access-cpqmx\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455883 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-client-ca\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 
09:44:39.455902 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-catalog-content\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455925 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac18d765-3a28-4da9-8823-fadbdad35b1d-serving-cert\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.455956 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-config\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.456596 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:39.95656804 +0000 UTC m=+125.090513533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559157 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.559334 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.059304435 +0000 UTC m=+125.193249948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559526 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqcdq\" (UniqueName: \"kubernetes.io/projected/ac18d765-3a28-4da9-8823-fadbdad35b1d-kube-api-access-sqcdq\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-utilities\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559716 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-proxy-ca-bundles\") pod 
\"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpqmx\" (UniqueName: \"kubernetes.io/projected/5b918bed-a785-4a4d-a784-0860bdbadadf-kube-api-access-cpqmx\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-client-ca\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-catalog-content\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac18d765-3a28-4da9-8823-fadbdad35b1d-serving-cert\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.559878 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-config\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.561474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-proxy-ca-bundles\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.561674 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-config\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.561778 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.061767165 +0000 UTC m=+125.195712748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.562652 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-utilities\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.563861 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-catalog-content\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.564996 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-client-ca\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.575659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac18d765-3a28-4da9-8823-fadbdad35b1d-serving-cert\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " 
pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.575938 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.585414 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqcdq\" (UniqueName: \"kubernetes.io/projected/ac18d765-3a28-4da9-8823-fadbdad35b1d-kube-api-access-sqcdq\") pod \"controller-manager-b98cb7f9b-xfvpn\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.587859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpqmx\" (UniqueName: \"kubernetes.io/projected/5b918bed-a785-4a4d-a784-0860bdbadadf-kube-api-access-cpqmx\") pod \"redhat-marketplace-5wz6v\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.589329 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.629342 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.642438 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.642509 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.642438 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.642752 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.650267 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.661070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.661221 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.161198487 +0000 UTC m=+125.295143990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.661422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.662551 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.162533275 +0000 UTC m=+125.296478958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.727331 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmc2"] Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.728651 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.750475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" event={"ID":"c388b29a-9aad-47a6-ba5d-8eabdb4480a6","Type":"ContainerDied","Data":"4c2073a8f1c4b20124e6bc605d146c6f85e888f703d0c834b9491ea18b103767"} Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.750534 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c2073a8f1c4b20124e6bc605d146c6f85e888f703d0c834b9491ea18b103767" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.750651 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534970-r2bbh" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.752121 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmc2"] Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.762666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.763158 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.26313672 +0000 UTC m=+125.397082213 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.787553 4760 generic.go:334] "Generic (PLEG): container finished" podID="d5f41609-3893-4649-be8b-2a3c839f082a" containerID="c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e" exitCode=0 Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.787726 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j58zh" event={"ID":"d5f41609-3893-4649-be8b-2a3c839f082a","Type":"ContainerDied","Data":"c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e"} Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.790830 4760 generic.go:334] "Generic (PLEG): container finished" podID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerID="b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606" exitCode=0 Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.790888 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-895t9" event={"ID":"919bb2ab-9fbf-4a58-835e-8348eebaf093","Type":"ContainerDied","Data":"b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606"} Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.796800 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=0.796776858 podStartE2EDuration="796.776858ms" podCreationTimestamp="2026-02-26 09:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:39.774807792 +0000 UTC m=+124.908753285" watchObservedRunningTime="2026-02-26 09:44:39.796776858 +0000 UTC m=+124.930722351" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.828776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" event={"ID":"53613d0e-5df3-4b18-8ebd-eb64ad64d487","Type":"ContainerStarted","Data":"d47f9e93158f004b095b0cc91c09eb67cd07502ba3f874cca4d0a41f6b318a3b"} Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.828864 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" event={"ID":"53613d0e-5df3-4b18-8ebd-eb64ad64d487","Type":"ContainerStarted","Data":"ab102a72ddb23ce40be020cec61805eaa3a8047b9d5e2806b79099cd7625b4db"} Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.864790 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-utilities\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.864860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxlpk\" (UniqueName: \"kubernetes.io/projected/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-kube-api-access-jxlpk\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.865131 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.865239 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-catalog-content\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.865632 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.365613377 +0000 UTC m=+125.499558870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.957185 4760 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.968821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.969057 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-catalog-content\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.969146 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-utilities\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.969183 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxlpk\" (UniqueName: \"kubernetes.io/projected/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-kube-api-access-jxlpk\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: E0226 09:44:39.970112 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.470088702 +0000 UTC m=+125.604034195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.970306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-catalog-content\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.970648 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-utilities\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:39 crc kubenswrapper[4760]: I0226 09:44:39.996766 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxlpk\" (UniqueName: \"kubernetes.io/projected/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-kube-api-access-jxlpk\") pod \"redhat-marketplace-pzmc2\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.073092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.073455 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.573441435 +0000 UTC m=+125.707386928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.074806 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.116635 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.174284 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.175774 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 09:44:40.675736718 +0000 UTC m=+125.809682211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.277429 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2391eab-226f-4788-8581-fdbffe0b2e95-kubelet-dir\") pod \"e2391eab-226f-4788-8581-fdbffe0b2e95\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.277540 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2391eab-226f-4788-8581-fdbffe0b2e95-kube-api-access\") pod \"e2391eab-226f-4788-8581-fdbffe0b2e95\" (UID: \"e2391eab-226f-4788-8581-fdbffe0b2e95\") " Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.277770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.278032 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2391eab-226f-4788-8581-fdbffe0b2e95-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"e2391eab-226f-4788-8581-fdbffe0b2e95" (UID: "e2391eab-226f-4788-8581-fdbffe0b2e95"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.278074 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.778060582 +0000 UTC m=+125.912006075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.281525 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2391eab-226f-4788-8581-fdbffe0b2e95-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2391eab-226f-4788-8581-fdbffe0b2e95" (UID: "e2391eab-226f-4788-8581-fdbffe0b2e95"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.348673 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wz6v"] Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.352991 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.357923 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:40 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:40 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:40 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.358030 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.374360 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn"] Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.377626 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmc2"] Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.378532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.381269 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2391eab-226f-4788-8581-fdbffe0b2e95-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.381319 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2391eab-226f-4788-8581-fdbffe0b2e95-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.383213 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.883090943 +0000 UTC m=+126.017036436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: W0226 09:44:40.395523 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac18d765_3a28_4da9_8823_fadbdad35b1d.slice/crio-e9e50a6b97cc7f4e22e0dc508a5fbecdc28360c69caf1f97bc7cc2b37c982fc3 WatchSource:0}: Error finding container e9e50a6b97cc7f4e22e0dc508a5fbecdc28360c69caf1f97bc7cc2b37c982fc3: Status 404 returned error can't find the container with id e9e50a6b97cc7f4e22e0dc508a5fbecdc28360c69caf1f97bc7cc2b37c982fc3 Feb 26 09:44:40 crc 
kubenswrapper[4760]: I0226 09:44:40.484505 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.485182 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:40.985157689 +0000 UTC m=+126.119103182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.585400 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.585757 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 09:44:41.085736253 +0000 UTC m=+126.219681746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.586021 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.586365 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:41.086355011 +0000 UTC m=+126.220300504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.599555 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef80081-75af-41e5-a0bf-f6a7d0d384bf" path="/var/lib/kubelet/pods/aef80081-75af-41e5-a0bf-f6a7d0d384bf/volumes" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.687076 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.687284 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:41.187242674 +0000 UTC m=+126.321188167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.687404 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.687842 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:41.18782784 +0000 UTC m=+126.321773333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.788686 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.788868 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 09:44:41.288813936 +0000 UTC m=+126.422759429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.789388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.789721 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 09:44:41.289710732 +0000 UTC m=+126.423656225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9fjgn" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.813203 4760 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-26T09:44:39.957561775Z","Handler":null,"Name":""} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.816523 4760 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.816555 4760 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.847032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" event={"ID":"ac18d765-3a28-4da9-8823-fadbdad35b1d","Type":"ContainerStarted","Data":"e9e50a6b97cc7f4e22e0dc508a5fbecdc28360c69caf1f97bc7cc2b37c982fc3"} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.849556 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerStarted","Data":"f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b"} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.849600 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerStarted","Data":"e4940b2b67a9e7c14602ac63c403c2c34bf00ad5fc54068ff93746e5df20af71"} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.851562 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wz6v" event={"ID":"5b918bed-a785-4a4d-a784-0860bdbadadf","Type":"ContainerStarted","Data":"f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec"} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.851611 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wz6v" event={"ID":"5b918bed-a785-4a4d-a784-0860bdbadadf","Type":"ContainerStarted","Data":"911f2d553aeeaaed3500e0724d05f580d36d04f542b7bc767b73d68152d1b053"} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.854366 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" event={"ID":"53613d0e-5df3-4b18-8ebd-eb64ad64d487","Type":"ContainerStarted","Data":"00c143405f747e1ce0f803906c0b0261013005923aaf36137ec67ab744041b77"} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.856030 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.856137 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e2391eab-226f-4788-8581-fdbffe0b2e95","Type":"ContainerDied","Data":"a81f3b18734b2cd2959196182cf512bb8127ef0ce0b16976be81eb288f2d5068"} Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.856191 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a81f3b18734b2cd2959196182cf512bb8127ef0ce0b16976be81eb288f2d5068" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.868888 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.869098 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.892618 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.893189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.893702 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.894044 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-q4p5w" podStartSLOduration=13.894035082 
podStartE2EDuration="13.894035082s" podCreationTimestamp="2026-02-26 09:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:40.893286711 +0000 UTC m=+126.027232204" watchObservedRunningTime="2026-02-26 09:44:40.894035082 +0000 UTC m=+126.027980565" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.897792 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.904625 4760 patch_prober.go:28] interesting pod/console-f9d7485db-cb5r8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.904668 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cb5r8" podUID="f4d6fe9e-5990-4e8b-8b6f-efbac8600193" containerName="console" probeResult="failure" output="Get \"https://10.217.0.42:8443/health\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.938071 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmvz4"] Feb 26 09:44:40 crc kubenswrapper[4760]: E0226 09:44:40.938433 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2391eab-226f-4788-8581-fdbffe0b2e95" containerName="pruner" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.938449 4760 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e2391eab-226f-4788-8581-fdbffe0b2e95" containerName="pruner" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.938622 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2391eab-226f-4788-8581-fdbffe0b2e95" containerName="pruner" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.939629 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.941608 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmvz4"] Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.943342 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 09:44:40 crc kubenswrapper[4760]: I0226 09:44:40.994672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.020704 4760 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.020747 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.075713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9fjgn\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:41 crc kubenswrapper[4760]: E0226 09:44:41.092443 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:44:41 crc kubenswrapper[4760]: E0226 09:44:41.094621 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.095654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-catalog-content\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.095693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc9sx\" (UniqueName: \"kubernetes.io/projected/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-kube-api-access-lc9sx\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.095763 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-utilities\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: E0226 09:44:41.097714 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:44:41 crc kubenswrapper[4760]: E0226 09:44:41.097781 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.118200 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.126258 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.175100 4760 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hczkw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]log ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]etcd ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/generic-apiserver-start-informers ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/max-in-flight-filter ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 26 09:44:41 crc kubenswrapper[4760]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 26 09:44:41 crc kubenswrapper[4760]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/project.openshift.io-projectcache ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-startinformers ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 26 09:44:41 crc kubenswrapper[4760]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 26 09:44:41 crc kubenswrapper[4760]: livez check failed Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.175181 
4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" podUID="9cb8ff53-c9e8-4626-a77e-160660696fbc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.199804 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-utilities\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.199871 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-catalog-content\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.199892 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc9sx\" (UniqueName: \"kubernetes.io/projected/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-kube-api-access-lc9sx\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.200292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-utilities\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.200516 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-catalog-content\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.234088 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc9sx\" (UniqueName: \"kubernetes.io/projected/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-kube-api-access-lc9sx\") pod \"redhat-operators-jmvz4\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.244826 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.260034 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-njc94" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.326073 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zzjzl"] Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.328457 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.381757 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzjzl"] Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.382008 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:41 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:41 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:41 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.382085 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.427964 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.535225 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-utilities\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.535345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-catalog-content\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.535401 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxwh\" (UniqueName: \"kubernetes.io/projected/3e598e10-dd81-4dce-ad36-a44df83ae7fd-kube-api-access-6vxwh\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.571517 4760 ???:1] "http: TLS handshake error from 192.168.126.11:56578: no serving certificate available for the kubelet" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.636968 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxwh\" (UniqueName: \"kubernetes.io/projected/3e598e10-dd81-4dce-ad36-a44df83ae7fd-kube-api-access-6vxwh\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.637043 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-utilities\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.637093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-catalog-content\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.637602 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-catalog-content\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.649685 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-utilities\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.663473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxwh\" (UniqueName: \"kubernetes.io/projected/3e598e10-dd81-4dce-ad36-a44df83ae7fd-kube-api-access-6vxwh\") pod \"redhat-operators-zzjzl\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.681714 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-9fjgn"] Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.721980 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.722815 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.728304 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.728485 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.735612 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.746758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1551e68-c39d-4fd3-af08-016df3350106-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.746853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1551e68-c39d-4fd3-af08-016df3350106-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.849220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e1551e68-c39d-4fd3-af08-016df3350106-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.849343 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1551e68-c39d-4fd3-af08-016df3350106-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.849428 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1551e68-c39d-4fd3-af08-016df3350106-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.883265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1551e68-c39d-4fd3-af08-016df3350106-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.890413 4760 generic.go:334] "Generic (PLEG): container finished" podID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerID="f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec" exitCode=0 Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.890516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wz6v" event={"ID":"5b918bed-a785-4a4d-a784-0860bdbadadf","Type":"ContainerDied","Data":"f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec"} Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.909521 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" event={"ID":"ac18d765-3a28-4da9-8823-fadbdad35b1d","Type":"ContainerStarted","Data":"ce32c95348479c21e565f456918d9f8638b54e4c75942466e72c5b7fecbac4f3"} Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.910131 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.917187 4760 generic.go:334] "Generic (PLEG): container finished" podID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerID="f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b" exitCode=0 Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.917255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerDied","Data":"f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b"} Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.923237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" event={"ID":"75bd609c-9135-4d9a-b974-a1b026ac6598","Type":"ContainerStarted","Data":"d0a29cae3cc0ead3a0737a4a639c20f0191b60b9b8c322ec94c0fe94b846426a"} Feb 26 09:44:41 crc kubenswrapper[4760]: I0226 09:44:41.942178 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podStartSLOduration=6.942156349 podStartE2EDuration="6.942156349s" podCreationTimestamp="2026-02-26 09:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:41.941070638 +0000 UTC m=+127.075016131" watchObservedRunningTime="2026-02-26 09:44:41.942156349 +0000 UTC m=+127.076101842" Feb 26 09:44:41 
crc kubenswrapper[4760]: I0226 09:44:41.953195 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.007396 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.008302 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmvz4"] Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.075951 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:44:42 crc kubenswrapper[4760]: W0226 09:44:42.082371 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ee6a724_49ab_489e_84b5_cc2f96c89dc2.slice/crio-fe9343022b5bfeaf4acafbf9c346d04ff74833038c35d5410666f6aced092770 WatchSource:0}: Error finding container fe9343022b5bfeaf4acafbf9c346d04ff74833038c35d5410666f6aced092770: Status 404 returned error can't find the container with id fe9343022b5bfeaf4acafbf9c346d04ff74833038c35d5410666f6aced092770 Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.361764 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:42 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:42 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:42 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.361821 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" 
podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.569624 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzjzl"] Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.607372 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.712802 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 09:44:42 crc kubenswrapper[4760]: W0226 09:44:42.760591 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode1551e68_c39d_4fd3_af08_016df3350106.slice/crio-6e64fce70631a36a77adace88a8c5e367dd3089c28f3ff2373c50d2f8de85e79 WatchSource:0}: Error finding container 6e64fce70631a36a77adace88a8c5e367dd3089c28f3ff2373c50d2f8de85e79: Status 404 returned error can't find the container with id 6e64fce70631a36a77adace88a8c5e367dd3089c28f3ff2373c50d2f8de85e79 Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.794144 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hczgn" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.832859 4760 ???:1] "http: TLS handshake error from 192.168.126.11:58740: no serving certificate available for the kubelet" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.939509 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" event={"ID":"75bd609c-9135-4d9a-b974-a1b026ac6598","Type":"ContainerStarted","Data":"6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17"} Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.939656 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.947373 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e1551e68-c39d-4fd3-af08-016df3350106","Type":"ContainerStarted","Data":"6e64fce70631a36a77adace88a8c5e367dd3089c28f3ff2373c50d2f8de85e79"} Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.968674 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" podStartSLOduration=69.9686553 podStartE2EDuration="1m9.9686553s" podCreationTimestamp="2026-02-26 09:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:42.968502875 +0000 UTC m=+128.102448368" watchObservedRunningTime="2026-02-26 09:44:42.9686553 +0000 UTC m=+128.102600793" Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.971958 4760 generic.go:334] "Generic (PLEG): container finished" podID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerID="4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2" exitCode=0 Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.972031 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzjzl" event={"ID":"3e598e10-dd81-4dce-ad36-a44df83ae7fd","Type":"ContainerDied","Data":"4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2"} Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.972055 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzjzl" event={"ID":"3e598e10-dd81-4dce-ad36-a44df83ae7fd","Type":"ContainerStarted","Data":"82fcf7e9cad6dde7d719fb70cbb22f18f10719c4d989770540f28dc30a32c654"} Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.974015 4760 
generic.go:334] "Generic (PLEG): container finished" podID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerID="3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c" exitCode=0 Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.974623 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmvz4" event={"ID":"6ee6a724-49ab-489e-84b5-cc2f96c89dc2","Type":"ContainerDied","Data":"3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c"} Feb 26 09:44:42 crc kubenswrapper[4760]: I0226 09:44:42.974934 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmvz4" event={"ID":"6ee6a724-49ab-489e-84b5-cc2f96c89dc2","Type":"ContainerStarted","Data":"fe9343022b5bfeaf4acafbf9c346d04ff74833038c35d5410666f6aced092770"} Feb 26 09:44:43 crc kubenswrapper[4760]: I0226 09:44:43.354468 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:43 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:43 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:43 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:43 crc kubenswrapper[4760]: I0226 09:44:43.354682 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:43 crc kubenswrapper[4760]: I0226 09:44:43.588072 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 26 09:44:43 crc kubenswrapper[4760]: I0226 09:44:43.983305 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e1551e68-c39d-4fd3-af08-016df3350106","Type":"ContainerStarted","Data":"29e950a4d541c0945ba46a4eda67da48798dcddd89721c61af9939829312c872"} Feb 26 09:44:44 crc kubenswrapper[4760]: I0226 09:44:44.018631 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.018616878 podStartE2EDuration="1.018616878s" podCreationTimestamp="2026-02-26 09:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:44.017169736 +0000 UTC m=+129.151115249" watchObservedRunningTime="2026-02-26 09:44:44.018616878 +0000 UTC m=+129.152562371" Feb 26 09:44:44 crc kubenswrapper[4760]: I0226 09:44:44.019085 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.019080881 podStartE2EDuration="3.019080881s" podCreationTimestamp="2026-02-26 09:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:44:43.995748516 +0000 UTC m=+129.129694009" watchObservedRunningTime="2026-02-26 09:44:44.019080881 +0000 UTC m=+129.153026374" Feb 26 09:44:44 crc kubenswrapper[4760]: I0226 09:44:44.355760 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:44 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:44 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:44 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:44 crc kubenswrapper[4760]: I0226 09:44:44.355837 4760 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:44 crc kubenswrapper[4760]: I0226 09:44:44.991319 4760 generic.go:334] "Generic (PLEG): container finished" podID="e1551e68-c39d-4fd3-af08-016df3350106" containerID="29e950a4d541c0945ba46a4eda67da48798dcddd89721c61af9939829312c872" exitCode=0 Feb 26 09:44:44 crc kubenswrapper[4760]: I0226 09:44:44.991365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e1551e68-c39d-4fd3-af08-016df3350106","Type":"ContainerDied","Data":"29e950a4d541c0945ba46a4eda67da48798dcddd89721c61af9939829312c872"} Feb 26 09:44:45 crc kubenswrapper[4760]: I0226 09:44:45.355337 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:45 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:45 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:45 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:45 crc kubenswrapper[4760]: I0226 09:44:45.355416 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:45 crc kubenswrapper[4760]: I0226 09:44:45.874928 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:45 crc kubenswrapper[4760]: I0226 09:44:45.879194 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-76f77b778f-hczkw" Feb 26 09:44:46 crc kubenswrapper[4760]: I0226 09:44:46.211972 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:44:46 crc kubenswrapper[4760]: I0226 09:44:46.357065 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:46 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:46 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:46 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:46 crc kubenswrapper[4760]: I0226 09:44:46.357124 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:46 crc kubenswrapper[4760]: I0226 09:44:46.716907 4760 ???:1] "http: TLS handshake error from 192.168.126.11:58744: no serving certificate available for the kubelet" Feb 26 09:44:46 crc kubenswrapper[4760]: I0226 09:44:46.762932 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-db5w8" Feb 26 09:44:47 crc kubenswrapper[4760]: I0226 09:44:47.354867 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:47 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:47 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:47 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:47 crc kubenswrapper[4760]: I0226 
09:44:47.354925 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:48 crc kubenswrapper[4760]: I0226 09:44:48.354252 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:48 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:48 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:48 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:48 crc kubenswrapper[4760]: I0226 09:44:48.354560 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:49 crc kubenswrapper[4760]: I0226 09:44:49.360418 4760 patch_prober.go:28] interesting pod/router-default-5444994796-dv5m7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 09:44:49 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Feb 26 09:44:49 crc kubenswrapper[4760]: [+]process-running ok Feb 26 09:44:49 crc kubenswrapper[4760]: healthz check failed Feb 26 09:44:49 crc kubenswrapper[4760]: I0226 09:44:49.360529 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dv5m7" podUID="c23c83e1-f20b-43ba-bdc8-29929236a384" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 09:44:49 crc kubenswrapper[4760]: I0226 09:44:49.643001 4760 
patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:49 crc kubenswrapper[4760]: I0226 09:44:49.643049 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:49 crc kubenswrapper[4760]: I0226 09:44:49.643098 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:49 crc kubenswrapper[4760]: I0226 09:44:49.643153 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:50 crc kubenswrapper[4760]: I0226 09:44:50.353866 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:50 crc kubenswrapper[4760]: I0226 09:44:50.355985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dv5m7" Feb 26 09:44:50 crc kubenswrapper[4760]: I0226 09:44:50.910050 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:50 crc kubenswrapper[4760]: I0226 09:44:50.913637 4760 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cb5r8" Feb 26 09:44:51 crc kubenswrapper[4760]: E0226 09:44:51.105490 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:44:51 crc kubenswrapper[4760]: E0226 09:44:51.108011 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:44:51 crc kubenswrapper[4760]: E0226 09:44:51.109478 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:44:51 crc kubenswrapper[4760]: E0226 09:44:51.109518 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:44:54 crc kubenswrapper[4760]: I0226 09:44:54.293336 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn"] Feb 26 09:44:54 crc kubenswrapper[4760]: I0226 09:44:54.294295 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" containerID="cri-o://ce32c95348479c21e565f456918d9f8638b54e4c75942466e72c5b7fecbac4f3" gracePeriod=30 Feb 26 09:44:54 crc kubenswrapper[4760]: I0226 09:44:54.317143 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"] Feb 26 09:44:54 crc kubenswrapper[4760]: I0226 09:44:54.317408 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" containerID="cri-o://bf7b02bc3c30c6c6789f7ea3ce28c1c4328cfb14ee15e7deb2786842904572bc" gracePeriod=30 Feb 26 09:44:56 crc kubenswrapper[4760]: I0226 09:44:56.159312 4760 generic.go:334] "Generic (PLEG): container finished" podID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerID="bf7b02bc3c30c6c6789f7ea3ce28c1c4328cfb14ee15e7deb2786842904572bc" exitCode=0 Feb 26 09:44:56 crc kubenswrapper[4760]: I0226 09:44:56.159399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" event={"ID":"62986796-95a2-4ea9-b0ea-e6156ecae439","Type":"ContainerDied","Data":"bf7b02bc3c30c6c6789f7ea3ce28c1c4328cfb14ee15e7deb2786842904572bc"} Feb 26 09:44:56 crc kubenswrapper[4760]: I0226 09:44:56.161894 4760 generic.go:334] "Generic (PLEG): container finished" podID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerID="ce32c95348479c21e565f456918d9f8638b54e4c75942466e72c5b7fecbac4f3" exitCode=0 Feb 26 09:44:56 crc kubenswrapper[4760]: I0226 09:44:56.161935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" 
event={"ID":"ac18d765-3a28-4da9-8823-fadbdad35b1d","Type":"ContainerDied","Data":"ce32c95348479c21e565f456918d9f8638b54e4c75942466e72c5b7fecbac4f3"} Feb 26 09:44:57 crc kubenswrapper[4760]: I0226 09:44:57.641745 4760 patch_prober.go:28] interesting pod/route-controller-manager-5dbdf696cf-whkj6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Feb 26 09:44:57 crc kubenswrapper[4760]: I0226 09:44:57.641869 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.630771 4760 patch_prober.go:28] interesting pod/controller-manager-b98cb7f9b-xfvpn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.631354 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642121 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: 
connection refused" start-of-body= Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642137 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642163 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642196 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642163 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642724 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"3ebfcb75003d2e9db98b357f14a450f4ed040f65680bcd9fd4cf43b70e87378d"} pod="openshift-console/downloads-7954f5f757-6v588" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642767 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" 
containerID="cri-o://3ebfcb75003d2e9db98b357f14a450f4ed040f65680bcd9fd4cf43b70e87378d" gracePeriod=2 Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642833 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:44:59 crc kubenswrapper[4760]: I0226 09:44:59.642894 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.139076 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg"] Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.139898 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.142542 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.143511 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.152732 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg"] Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.198201 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c63f9ee9-ee43-4787-a79a-57125c9239a2-secret-volume\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.198245 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c63f9ee9-ee43-4787-a79a-57125c9239a2-config-volume\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.198269 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjkf\" (UniqueName: \"kubernetes.io/projected/c63f9ee9-ee43-4787-a79a-57125c9239a2-kube-api-access-sjjkf\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.299410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c63f9ee9-ee43-4787-a79a-57125c9239a2-secret-volume\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.299470 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c63f9ee9-ee43-4787-a79a-57125c9239a2-config-volume\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.299505 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjkf\" (UniqueName: \"kubernetes.io/projected/c63f9ee9-ee43-4787-a79a-57125c9239a2-kube-api-access-sjjkf\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.300599 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c63f9ee9-ee43-4787-a79a-57125c9239a2-config-volume\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.306642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c63f9ee9-ee43-4787-a79a-57125c9239a2-secret-volume\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.319312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjkf\" (UniqueName: \"kubernetes.io/projected/c63f9ee9-ee43-4787-a79a-57125c9239a2-kube-api-access-sjjkf\") pod \"collect-profiles-29534985-6vbhg\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:00 crc kubenswrapper[4760]: I0226 09:45:00.460662 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:45:01 crc kubenswrapper[4760]: E0226 09:45:01.092059 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:01 crc kubenswrapper[4760]: E0226 09:45:01.093650 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:01 crc kubenswrapper[4760]: E0226 09:45:01.094919 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:01 crc kubenswrapper[4760]: E0226 09:45:01.095003 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:01 crc kubenswrapper[4760]: I0226 09:45:01.133243 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:45:01 crc kubenswrapper[4760]: I0226 09:45:01.190842 4760 generic.go:334] "Generic (PLEG): container finished" podID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerID="3ebfcb75003d2e9db98b357f14a450f4ed040f65680bcd9fd4cf43b70e87378d" exitCode=0 Feb 26 09:45:01 crc kubenswrapper[4760]: I0226 09:45:01.190889 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6v588" event={"ID":"de95d7ed-3895-43a6-b422-caae1114b0ec","Type":"ContainerDied","Data":"3ebfcb75003d2e9db98b357f14a450f4ed040f65680bcd9fd4cf43b70e87378d"} Feb 26 09:45:07 crc kubenswrapper[4760]: I0226 09:45:07.513279 4760 ???:1] "http: TLS handshake error from 192.168.126.11:51348: no serving certificate available for the kubelet" Feb 26 09:45:08 crc kubenswrapper[4760]: I0226 09:45:08.431377 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dpdz4_cd519bc0-6b98-495a-bc74-e515b87ec6c1/kube-multus-additional-cni-plugins/0.log" Feb 26 09:45:08 crc kubenswrapper[4760]: I0226 09:45:08.431421 4760 generic.go:334] "Generic (PLEG): container finished" podID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" exitCode=137 Feb 26 09:45:08 crc kubenswrapper[4760]: I0226 09:45:08.431486 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" event={"ID":"cd519bc0-6b98-495a-bc74-e515b87ec6c1","Type":"ContainerDied","Data":"acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b"} Feb 26 09:45:08 crc kubenswrapper[4760]: I0226 09:45:08.641215 4760 patch_prober.go:28] interesting pod/route-controller-manager-5dbdf696cf-whkj6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:45:08 crc kubenswrapper[4760]: I0226 09:45:08.641274 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:45:09 crc kubenswrapper[4760]: I0226 09:45:09.631048 4760 patch_prober.go:28] interesting pod/controller-manager-b98cb7f9b-xfvpn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 26 09:45:09 crc kubenswrapper[4760]: I0226 09:45:09.631150 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 26 09:45:09 crc kubenswrapper[4760]: I0226 09:45:09.642840 4760 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:09 crc kubenswrapper[4760]: I0226 09:45:09.642907 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:11 crc kubenswrapper[4760]: I0226 09:45:11.028698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nnr4g" Feb 26 09:45:11 crc kubenswrapper[4760]: E0226 09:45:11.090027 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:11 crc kubenswrapper[4760]: E0226 09:45:11.090625 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:11 crc kubenswrapper[4760]: E0226 09:45:11.091006 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:11 crc kubenswrapper[4760]: E0226 09:45:11.091062 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.521719 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.522945 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.614982 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f02c9c60-3424-47fc-ab87-23a591f3af5d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.615090 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f02c9c60-3424-47fc-ab87-23a591f3af5d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.621078 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.631898 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.716589 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f02c9c60-3424-47fc-ab87-23a591f3af5d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.716671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f02c9c60-3424-47fc-ab87-23a591f3af5d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.716842 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f02c9c60-3424-47fc-ab87-23a591f3af5d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.735811 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f02c9c60-3424-47fc-ab87-23a591f3af5d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:13 crc kubenswrapper[4760]: I0226 09:45:13.845767 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:45:18 crc kubenswrapper[4760]: I0226 09:45:18.641338 4760 patch_prober.go:28] interesting pod/route-controller-manager-5dbdf696cf-whkj6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:45:18 crc kubenswrapper[4760]: I0226 09:45:18.641843 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.631774 4760 patch_prober.go:28] interesting pod/controller-manager-b98cb7f9b-xfvpn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.632093 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.642801 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial 
tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.642905 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.715639 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.718100 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.739188 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.739347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.740204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-var-lock\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.741175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2217860c-1b72-4728-9f27-d13f66cd5e7b-kube-api-access\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.843274 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2217860c-1b72-4728-9f27-d13f66cd5e7b-kube-api-access\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.843389 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.843465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-var-lock\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.843558 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-var-lock\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.843641 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:19 crc kubenswrapper[4760]: I0226 09:45:19.874390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2217860c-1b72-4728-9f27-d13f66cd5e7b-kube-api-access\") pod \"installer-9-crc\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:20 crc kubenswrapper[4760]: I0226 09:45:20.054281 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:45:21 crc kubenswrapper[4760]: E0226 09:45:21.090316 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:21 crc kubenswrapper[4760]: E0226 09:45:21.091373 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:21 crc kubenswrapper[4760]: E0226 09:45:21.091920 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:21 crc 
kubenswrapper[4760]: E0226 09:45:21.092040 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:28 crc kubenswrapper[4760]: I0226 09:45:28.641736 4760 patch_prober.go:28] interesting pod/route-controller-manager-5dbdf696cf-whkj6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:45:28 crc kubenswrapper[4760]: I0226 09:45:28.642176 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:45:29 crc kubenswrapper[4760]: I0226 09:45:29.630926 4760 patch_prober.go:28] interesting pod/controller-manager-b98cb7f9b-xfvpn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 26 09:45:29 crc kubenswrapper[4760]: I0226 09:45:29.631016 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 26 09:45:29 crc kubenswrapper[4760]: I0226 09:45:29.643695 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:29 crc kubenswrapper[4760]: I0226 09:45:29.643801 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:31 crc kubenswrapper[4760]: E0226 09:45:31.089903 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:31 crc kubenswrapper[4760]: E0226 09:45:31.090417 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:31 crc kubenswrapper[4760]: E0226 09:45:31.090797 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:31 crc kubenswrapper[4760]: E0226 09:45:31.090851 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:38 crc kubenswrapper[4760]: I0226 09:45:38.641653 4760 patch_prober.go:28] interesting pod/route-controller-manager-5dbdf696cf-whkj6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:45:38 crc kubenswrapper[4760]: I0226 09:45:38.642213 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.311528 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.318091 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.340881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62986796-95a2-4ea9-b0ea-e6156ecae439-serving-cert\") pod \"62986796-95a2-4ea9-b0ea-e6156ecae439\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.340957 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1551e68-c39d-4fd3-af08-016df3350106-kube-api-access\") pod \"e1551e68-c39d-4fd3-af08-016df3350106\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.340994 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64g8t\" (UniqueName: \"kubernetes.io/projected/62986796-95a2-4ea9-b0ea-e6156ecae439-kube-api-access-64g8t\") pod \"62986796-95a2-4ea9-b0ea-e6156ecae439\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.351746 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62986796-95a2-4ea9-b0ea-e6156ecae439-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "62986796-95a2-4ea9-b0ea-e6156ecae439" (UID: "62986796-95a2-4ea9-b0ea-e6156ecae439"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.353888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1551e68-c39d-4fd3-af08-016df3350106-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e1551e68-c39d-4fd3-af08-016df3350106" (UID: "e1551e68-c39d-4fd3-af08-016df3350106"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.361194 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62986796-95a2-4ea9-b0ea-e6156ecae439-kube-api-access-64g8t" (OuterVolumeSpecName: "kube-api-access-64g8t") pod "62986796-95a2-4ea9-b0ea-e6156ecae439" (UID: "62986796-95a2-4ea9-b0ea-e6156ecae439"). InnerVolumeSpecName "kube-api-access-64g8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442165 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1551e68-c39d-4fd3-af08-016df3350106-kubelet-dir\") pod \"e1551e68-c39d-4fd3-af08-016df3350106\" (UID: \"e1551e68-c39d-4fd3-af08-016df3350106\") " Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442259 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-client-ca\") pod \"62986796-95a2-4ea9-b0ea-e6156ecae439\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-config\") pod \"62986796-95a2-4ea9-b0ea-e6156ecae439\" (UID: \"62986796-95a2-4ea9-b0ea-e6156ecae439\") " Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442314 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1551e68-c39d-4fd3-af08-016df3350106-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e1551e68-c39d-4fd3-af08-016df3350106" (UID: "e1551e68-c39d-4fd3-af08-016df3350106"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442464 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62986796-95a2-4ea9-b0ea-e6156ecae439-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442477 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1551e68-c39d-4fd3-af08-016df3350106-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442489 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64g8t\" (UniqueName: \"kubernetes.io/projected/62986796-95a2-4ea9-b0ea-e6156ecae439-kube-api-access-64g8t\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442497 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1551e68-c39d-4fd3-af08-016df3350106-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442808 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-client-ca" (OuterVolumeSpecName: "client-ca") pod "62986796-95a2-4ea9-b0ea-e6156ecae439" (UID: "62986796-95a2-4ea9-b0ea-e6156ecae439"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.442858 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-config" (OuterVolumeSpecName: "config") pod "62986796-95a2-4ea9-b0ea-e6156ecae439" (UID: "62986796-95a2-4ea9-b0ea-e6156ecae439"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.543409 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.543450 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62986796-95a2-4ea9-b0ea-e6156ecae439-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.630336 4760 patch_prober.go:28] interesting pod/controller-manager-b98cb7f9b-xfvpn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.630452 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.643046 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.643132 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: 
connect: connection refused" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.765567 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e1551e68-c39d-4fd3-af08-016df3350106","Type":"ContainerDied","Data":"6e64fce70631a36a77adace88a8c5e367dd3089c28f3ff2373c50d2f8de85e79"} Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.765628 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.765639 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e64fce70631a36a77adace88a8c5e367dd3089c28f3ff2373c50d2f8de85e79" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.767234 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" event={"ID":"62986796-95a2-4ea9-b0ea-e6156ecae439","Type":"ContainerDied","Data":"b5e3d1cc0786874ab0c4677c2026531ec918a8ad584b9e78ad5b9c02f06355fb"} Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.767274 4760 scope.go:117] "RemoveContainer" containerID="bf7b02bc3c30c6c6789f7ea3ce28c1c4328cfb14ee15e7deb2786842904572bc" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.767329 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6" Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.806622 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"] Feb 26 09:45:39 crc kubenswrapper[4760]: I0226 09:45:39.817669 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dbdf696cf-whkj6"] Feb 26 09:45:40 crc kubenswrapper[4760]: I0226 09:45:40.585560 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" path="/var/lib/kubelet/pods/62986796-95a2-4ea9-b0ea-e6156ecae439/volumes" Feb 26 09:45:41 crc kubenswrapper[4760]: E0226 09:45:41.089887 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:41 crc kubenswrapper[4760]: E0226 09:45:41.090320 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:41 crc kubenswrapper[4760]: E0226 09:45:41.090638 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not 
found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:41 crc kubenswrapper[4760]: E0226 09:45:41.090712 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.405935 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs"] Feb 26 09:45:41 crc kubenswrapper[4760]: E0226 09:45:41.406140 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.406155 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" Feb 26 09:45:41 crc kubenswrapper[4760]: E0226 09:45:41.406170 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1551e68-c39d-4fd3-af08-016df3350106" containerName="pruner" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.406178 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1551e68-c39d-4fd3-af08-016df3350106" containerName="pruner" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.406288 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="62986796-95a2-4ea9-b0ea-e6156ecae439" containerName="route-controller-manager" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.406301 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1551e68-c39d-4fd3-af08-016df3350106" 
containerName="pruner" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.406653 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.412101 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.412526 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.412699 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.412866 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.412990 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.413107 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.441970 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs"] Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.572898 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2kvp\" (UniqueName: \"kubernetes.io/projected/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-kube-api-access-h2kvp\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " 
pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.572961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-client-ca\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.573003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-serving-cert\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.573040 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-config\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.674999 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2kvp\" (UniqueName: \"kubernetes.io/projected/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-kube-api-access-h2kvp\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.675148 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-client-ca\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.675199 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-serving-cert\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.675234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-config\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.677004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-config\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.677409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-client-ca\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 
09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.684807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-serving-cert\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.693385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2kvp\" (UniqueName: \"kubernetes.io/projected/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-kube-api-access-h2kvp\") pod \"route-controller-manager-b44b844dd-6cjzs\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:41 crc kubenswrapper[4760]: I0226 09:45:41.986229 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:48 crc kubenswrapper[4760]: I0226 09:45:48.493227 4760 ???:1] "http: TLS handshake error from 192.168.126.11:42506: no serving certificate available for the kubelet" Feb 26 09:45:49 crc kubenswrapper[4760]: I0226 09:45:49.643028 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:49 crc kubenswrapper[4760]: I0226 09:45:49.643104 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:50 crc kubenswrapper[4760]: 
I0226 09:45:50.630519 4760 patch_prober.go:28] interesting pod/controller-manager-b98cb7f9b-xfvpn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": context deadline exceeded" start-of-body= Feb 26 09:45:50 crc kubenswrapper[4760]: I0226 09:45:50.631053 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": context deadline exceeded" Feb 26 09:45:51 crc kubenswrapper[4760]: E0226 09:45:51.090256 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:51 crc kubenswrapper[4760]: E0226 09:45:51.091052 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:51 crc kubenswrapper[4760]: E0226 09:45:51.091420 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 26 09:45:51 crc kubenswrapper[4760]: E0226 09:45:51.091489 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:53 crc kubenswrapper[4760]: E0226 09:45:53.222765 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 09:45:53 crc kubenswrapper[4760]: E0226 09:45:53.222979 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc9sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jmvz4_openshift-marketplace(6ee6a724-49ab-489e-84b5-cc2f96c89dc2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 09:45:53 crc kubenswrapper[4760]: E0226 09:45:53.224612 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jmvz4" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" Feb 26 09:45:54 crc 
kubenswrapper[4760]: E0226 09:45:54.722243 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jmvz4" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" Feb 26 09:45:54 crc kubenswrapper[4760]: E0226 09:45:54.786810 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 09:45:54 crc kubenswrapper[4760]: E0226 09:45:54.786959 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlfdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-j58zh_openshift-marketplace(d5f41609-3893-4649-be8b-2a3c839f082a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 09:45:54 crc kubenswrapper[4760]: E0226 09:45:54.788706 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-j58zh" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" Feb 26 09:45:54 crc 
kubenswrapper[4760]: E0226 09:45:54.832300 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 09:45:54 crc kubenswrapper[4760]: E0226 09:45:54.832465 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbbdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-hvl2n_openshift-marketplace(7427c503-5c81-488e-b0f0-61b2537a96a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 09:45:54 crc kubenswrapper[4760]: E0226 09:45:54.833805 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hvl2n" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" Feb 26 09:45:56 crc kubenswrapper[4760]: E0226 09:45:56.474034 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hvl2n" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" Feb 26 09:45:56 crc kubenswrapper[4760]: E0226 09:45:56.478532 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-j58zh" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" Feb 26 09:45:56 crc kubenswrapper[4760]: E0226 09:45:56.546272 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 09:45:56 crc kubenswrapper[4760]: E0226 09:45:56.546419 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prsb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g8gj5_openshift-marketplace(bedbd455-baad-4b56-86b7-1d851407744b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 09:45:56 crc kubenswrapper[4760]: E0226 09:45:56.547867 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g8gj5" podUID="bedbd455-baad-4b56-86b7-1d851407744b" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.697722 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g8gj5" podUID="bedbd455-baad-4b56-86b7-1d851407744b" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.764059 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.764304 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cpqmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5wz6v_openshift-marketplace(5b918bed-a785-4a4d-a784-0860bdbadadf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.765772 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5wz6v" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" Feb 26 09:45:57 crc 
kubenswrapper[4760]: I0226 09:45:57.804455 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn"
Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.810255 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.810443 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxlpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pzmc2_openshift-marketplace(1e32cadf-ce42-42fd-85de-7cfd1fd43dea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.811699 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pzmc2" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea"
Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.816674 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dpdz4_cd519bc0-6b98-495a-bc74-e515b87ec6c1/kube-multus-additional-cni-plugins/0.log"
Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.817066 4760 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.854169 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9fb57c5c6-ch52h"] Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.854531 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.854549 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.854561 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.854586 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.854734 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" containerName="kube-multus-additional-cni-plugins" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.854755 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" containerName="controller-manager" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.855326 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h"
Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.862527 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9fb57c5c6-ch52h"]
Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.910692 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.910846 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5ngq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-895t9_openshift-marketplace(919bb2ab-9fbf-4a58-835e-8348eebaf093): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.912111 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-895t9" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093"
Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916109 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd519bc0-6b98-495a-bc74-e515b87ec6c1-tuning-conf-dir\") pod \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") "
Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916180 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac18d765-3a28-4da9-8823-fadbdad35b1d-serving-cert\") pod \"ac18d765-3a28-4da9-8823-fadbdad35b1d\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") "
Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916204 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84tvs\" (UniqueName: \"kubernetes.io/projected/cd519bc0-6b98-495a-bc74-e515b87ec6c1-kube-api-access-84tvs\") pod \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") "
Feb 26 09:45:57
crc kubenswrapper[4760]: I0226 09:45:57.916280 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-proxy-ca-bundles\") pod \"ac18d765-3a28-4da9-8823-fadbdad35b1d\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916354 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-config\") pod \"ac18d765-3a28-4da9-8823-fadbdad35b1d\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916458 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-client-ca\") pod \"ac18d765-3a28-4da9-8823-fadbdad35b1d\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916506 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqcdq\" (UniqueName: \"kubernetes.io/projected/ac18d765-3a28-4da9-8823-fadbdad35b1d-kube-api-access-sqcdq\") pod \"ac18d765-3a28-4da9-8823-fadbdad35b1d\" (UID: \"ac18d765-3a28-4da9-8823-fadbdad35b1d\") " Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd519bc0-6b98-495a-bc74-e515b87ec6c1-cni-sysctl-allowlist\") pod \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.916746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cd519bc0-6b98-495a-bc74-e515b87ec6c1-ready\") 
pod \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\" (UID: \"cd519bc0-6b98-495a-bc74-e515b87ec6c1\") " Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.917924 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd519bc0-6b98-495a-bc74-e515b87ec6c1-ready" (OuterVolumeSpecName: "ready") pod "cd519bc0-6b98-495a-bc74-e515b87ec6c1" (UID: "cd519bc0-6b98-495a-bc74-e515b87ec6c1"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.917985 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd519bc0-6b98-495a-bc74-e515b87ec6c1-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "cd519bc0-6b98-495a-bc74-e515b87ec6c1" (UID: "cd519bc0-6b98-495a-bc74-e515b87ec6c1"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.919331 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac18d765-3a28-4da9-8823-fadbdad35b1d" (UID: "ac18d765-3a28-4da9-8823-fadbdad35b1d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.919592 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ac18d765-3a28-4da9-8823-fadbdad35b1d" (UID: "ac18d765-3a28-4da9-8823-fadbdad35b1d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.919756 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-config" (OuterVolumeSpecName: "config") pod "ac18d765-3a28-4da9-8823-fadbdad35b1d" (UID: "ac18d765-3a28-4da9-8823-fadbdad35b1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.922508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd519bc0-6b98-495a-bc74-e515b87ec6c1-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "cd519bc0-6b98-495a-bc74-e515b87ec6c1" (UID: "cd519bc0-6b98-495a-bc74-e515b87ec6c1"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.929343 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac18d765-3a28-4da9-8823-fadbdad35b1d-kube-api-access-sqcdq" (OuterVolumeSpecName: "kube-api-access-sqcdq") pod "ac18d765-3a28-4da9-8823-fadbdad35b1d" (UID: "ac18d765-3a28-4da9-8823-fadbdad35b1d"). InnerVolumeSpecName "kube-api-access-sqcdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.930180 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd519bc0-6b98-495a-bc74-e515b87ec6c1-kube-api-access-84tvs" (OuterVolumeSpecName: "kube-api-access-84tvs") pod "cd519bc0-6b98-495a-bc74-e515b87ec6c1" (UID: "cd519bc0-6b98-495a-bc74-e515b87ec6c1"). InnerVolumeSpecName "kube-api-access-84tvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: I0226 09:45:57.942684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac18d765-3a28-4da9-8823-fadbdad35b1d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac18d765-3a28-4da9-8823-fadbdad35b1d" (UID: "ac18d765-3a28-4da9-8823-fadbdad35b1d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.957732 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.957928 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vxwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zzjzl_openshift-marketplace(3e598e10-dd81-4dce-ad36-a44df83ae7fd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 09:45:57 crc kubenswrapper[4760]: E0226 09:45:57.959158 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zzjzl" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" Feb 26 09:45:58 crc 
kubenswrapper[4760]: I0226 09:45:58.018944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzw2\" (UniqueName: \"kubernetes.io/projected/2b15690d-3d20-4630-bbec-5a122f6cca9a-kube-api-access-zdzw2\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b15690d-3d20-4630-bbec-5a122f6cca9a-serving-cert\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019114 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-config\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019147 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-proxy-ca-bundles\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019305 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-client-ca\") 
pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019497 4760 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cd519bc0-6b98-495a-bc74-e515b87ec6c1-ready\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019514 4760 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd519bc0-6b98-495a-bc74-e515b87ec6c1-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019528 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84tvs\" (UniqueName: \"kubernetes.io/projected/cd519bc0-6b98-495a-bc74-e515b87ec6c1-kube-api-access-84tvs\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019538 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac18d765-3a28-4da9-8823-fadbdad35b1d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019549 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019561 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019587 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac18d765-3a28-4da9-8823-fadbdad35b1d-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019602 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqcdq\" (UniqueName: \"kubernetes.io/projected/ac18d765-3a28-4da9-8823-fadbdad35b1d-kube-api-access-sqcdq\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.019610 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd519bc0-6b98-495a-bc74-e515b87ec6c1-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.121063 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-config\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.121479 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-proxy-ca-bundles\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.121525 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-client-ca\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.121603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdzw2\" (UniqueName: 
\"kubernetes.io/projected/2b15690d-3d20-4630-bbec-5a122f6cca9a-kube-api-access-zdzw2\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.121702 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b15690d-3d20-4630-bbec-5a122f6cca9a-serving-cert\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.122749 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-client-ca\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.122760 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-proxy-ca-bundles\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.122807 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-config\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.130181 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b15690d-3d20-4630-bbec-5a122f6cca9a-serving-cert\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.137543 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdzw2\" (UniqueName: \"kubernetes.io/projected/2b15690d-3d20-4630-bbec-5a122f6cca9a-kube-api-access-zdzw2\") pod \"controller-manager-9fb57c5c6-ch52h\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.202336 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.251614 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs"] Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.257216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg"] Feb 26 09:45:58 crc kubenswrapper[4760]: W0226 09:45:58.261480 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b5f14bb_ce98_44c7_ba98_2b55bfdefcf3.slice/crio-b3e33679762e4769b657e1dd95af181d8ee3b26a09164a26c81432c3ef1430c9 WatchSource:0}: Error finding container b3e33679762e4769b657e1dd95af181d8ee3b26a09164a26c81432c3ef1430c9: Status 404 returned error can't find the container with id b3e33679762e4769b657e1dd95af181d8ee3b26a09164a26c81432c3ef1430c9 Feb 26 09:45:58 crc kubenswrapper[4760]: W0226 09:45:58.267829 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc63f9ee9_ee43_4787_a79a_57125c9239a2.slice/crio-d31ba38a55c39024b1b0d180877badcd6b9dab39fb1eeb42c883e91e330dc4fe WatchSource:0}: Error finding container d31ba38a55c39024b1b0d180877badcd6b9dab39fb1eeb42c883e91e330dc4fe: Status 404 returned error can't find the container with id d31ba38a55c39024b1b0d180877badcd6b9dab39fb1eeb42c883e91e330dc4fe Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.324621 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.351613 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.385492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2217860c-1b72-4728-9f27-d13f66cd5e7b","Type":"ContainerStarted","Data":"6a4189b40350b8a989f63cf6a7999f83e3f27bb136f2412e010610abf153be4d"} Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.387271 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6v588" event={"ID":"de95d7ed-3895-43a6-b422-caae1114b0ec","Type":"ContainerStarted","Data":"48f55205bdeea086ba57079b6a1ac29a06b7d437a35846736e065ba18737e18a"} Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.388010 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.388268 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.388384 4760 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.389689 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f02c9c60-3424-47fc-ab87-23a591f3af5d","Type":"ContainerStarted","Data":"7da3347d872d6701018d705bb8f4a0e73a574b572eaa53ebc3e794f07f7f6033"} Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.398622 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-dpdz4_cd519bc0-6b98-495a-bc74-e515b87ec6c1/kube-multus-additional-cni-plugins/0.log" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.398727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" event={"ID":"cd519bc0-6b98-495a-bc74-e515b87ec6c1","Type":"ContainerDied","Data":"a2b12a65872af7b5df387aeaf810fdd9ee7a27b82b1faf036474360fd9c4538b"} Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.398762 4760 scope.go:117] "RemoveContainer" containerID="acc1147fd1ac2aceab464a471d06da0f62aec8827ec9754878e25b6adecf227b" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.398852 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-dpdz4" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.401543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" event={"ID":"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3","Type":"ContainerStarted","Data":"b3e33679762e4769b657e1dd95af181d8ee3b26a09164a26c81432c3ef1430c9"} Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.406230 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.406227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn" event={"ID":"ac18d765-3a28-4da9-8823-fadbdad35b1d","Type":"ContainerDied","Data":"e9e50a6b97cc7f4e22e0dc508a5fbecdc28360c69caf1f97bc7cc2b37c982fc3"} Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.411697 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" event={"ID":"c63f9ee9-ee43-4787-a79a-57125c9239a2","Type":"ContainerStarted","Data":"d31ba38a55c39024b1b0d180877badcd6b9dab39fb1eeb42c883e91e330dc4fe"} Feb 26 09:45:58 crc kubenswrapper[4760]: E0226 09:45:58.419631 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pzmc2" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" Feb 26 09:45:58 crc kubenswrapper[4760]: E0226 09:45:58.419707 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-895t9" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" Feb 26 09:45:58 crc kubenswrapper[4760]: E0226 09:45:58.419772 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5wz6v" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" Feb 26 09:45:58 crc kubenswrapper[4760]: E0226 09:45:58.419807 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zzjzl" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.456254 4760 scope.go:117] "RemoveContainer" containerID="ce32c95348479c21e565f456918d9f8638b54e4c75942466e72c5b7fecbac4f3" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.484400 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn"] Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.504302 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b98cb7f9b-xfvpn"] Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.539875 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dpdz4"] Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.543631 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-dpdz4"] Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.601855 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ac18d765-3a28-4da9-8823-fadbdad35b1d" path="/var/lib/kubelet/pods/ac18d765-3a28-4da9-8823-fadbdad35b1d/volumes" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.603526 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd519bc0-6b98-495a-bc74-e515b87ec6c1" path="/var/lib/kubelet/pods/cd519bc0-6b98-495a-bc74-e515b87ec6c1/volumes" Feb 26 09:45:58 crc kubenswrapper[4760]: I0226 09:45:58.677558 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9fb57c5c6-ch52h"] Feb 26 09:45:58 crc kubenswrapper[4760]: W0226 09:45:58.681870 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b15690d_3d20_4630_bbec_5a122f6cca9a.slice/crio-7120fdb08b2d7f6d317e18ea1e5cc6fa2a4891db1001010a2182de1908ae7352 WatchSource:0}: Error finding container 7120fdb08b2d7f6d317e18ea1e5cc6fa2a4891db1001010a2182de1908ae7352: Status 404 returned error can't find the container with id 7120fdb08b2d7f6d317e18ea1e5cc6fa2a4891db1001010a2182de1908ae7352 Feb 26 09:45:58 crc kubenswrapper[4760]: E0226 09:45:58.693939 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac18d765_3a28_4da9_8823_fadbdad35b1d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac18d765_3a28_4da9_8823_fadbdad35b1d.slice/crio-e9e50a6b97cc7f4e22e0dc508a5fbecdc28360c69caf1f97bc7cc2b37c982fc3\": RecentStats: unable to find data in memory cache]" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.419012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" 
event={"ID":"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3","Type":"ContainerStarted","Data":"5f6b9bb8b4e74224460cbaac5481486198673b40691b6d9539ae1a16afc1f779"} Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.419538 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.421723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f02c9c60-3424-47fc-ab87-23a591f3af5d","Type":"ContainerStarted","Data":"dc37d00d125d8e56bdff415ce128b8be64415bc856412d8841b24fc7f715e083"} Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.422968 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" event={"ID":"2b15690d-3d20-4630-bbec-5a122f6cca9a","Type":"ContainerStarted","Data":"71af3d91b17634ef98427d471cc1fa09f2d273cd68ee800ef3d51f4a1fbf6a16"} Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.423017 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" event={"ID":"2b15690d-3d20-4630-bbec-5a122f6cca9a","Type":"ContainerStarted","Data":"7120fdb08b2d7f6d317e18ea1e5cc6fa2a4891db1001010a2182de1908ae7352"} Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.423777 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.425118 4760 generic.go:334] "Generic (PLEG): container finished" podID="c63f9ee9-ee43-4787-a79a-57125c9239a2" containerID="0a207ca86813a0d688fdd24747b49d6d85766bd4380b1e61f6f9834288b8f892" exitCode=0 Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.425181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" event={"ID":"c63f9ee9-ee43-4787-a79a-57125c9239a2","Type":"ContainerDied","Data":"0a207ca86813a0d688fdd24747b49d6d85766bd4380b1e61f6f9834288b8f892"} Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.425510 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.427334 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.427376 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.427673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2217860c-1b72-4728-9f27-d13f66cd5e7b","Type":"ContainerStarted","Data":"ec2e398566dbe4bcf322170912f49e387415fc7c0348d7a61a9f19991f4badef"} Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.428881 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.435318 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" podStartSLOduration=45.435297433 podStartE2EDuration="45.435297433s" podCreationTimestamp="2026-02-26 09:45:14 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:45:59.432025396 +0000 UTC m=+204.565970879" watchObservedRunningTime="2026-02-26 09:45:59.435297433 +0000 UTC m=+204.569242926" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.467712 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=46.46768279 podStartE2EDuration="46.46768279s" podCreationTimestamp="2026-02-26 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:45:59.465318207 +0000 UTC m=+204.599263700" watchObservedRunningTime="2026-02-26 09:45:59.46768279 +0000 UTC m=+204.601628283" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.484498 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=40.484473924 podStartE2EDuration="40.484473924s" podCreationTimestamp="2026-02-26 09:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:45:59.480555931 +0000 UTC m=+204.614501424" watchObservedRunningTime="2026-02-26 09:45:59.484473924 +0000 UTC m=+204.618419417" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.509916 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" podStartSLOduration=45.509899037 podStartE2EDuration="45.509899037s" podCreationTimestamp="2026-02-26 09:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:45:59.504525085 +0000 UTC m=+204.638470578" watchObservedRunningTime="2026-02-26 09:45:59.509899037 +0000 
UTC m=+204.643844530" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.683208 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.683262 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.683529 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:45:59 crc kubenswrapper[4760]: I0226 09:45:59.683657 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.304477 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534986-jrj4w"] Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.305259 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.308220 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jn6zk" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.308221 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.309678 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.319838 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-jrj4w"] Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.359201 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2kp\" (UniqueName: \"kubernetes.io/projected/dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28-kube-api-access-6f2kp\") pod \"auto-csr-approver-29534986-jrj4w\" (UID: \"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28\") " pod="openshift-infra/auto-csr-approver-29534986-jrj4w" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.434780 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.436096 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.460713 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f2kp\" (UniqueName: \"kubernetes.io/projected/dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28-kube-api-access-6f2kp\") pod \"auto-csr-approver-29534986-jrj4w\" (UID: \"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28\") " pod="openshift-infra/auto-csr-approver-29534986-jrj4w" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.485732 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f2kp\" (UniqueName: \"kubernetes.io/projected/dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28-kube-api-access-6f2kp\") pod \"auto-csr-approver-29534986-jrj4w\" (UID: \"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28\") " pod="openshift-infra/auto-csr-approver-29534986-jrj4w" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.619532 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.675102 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.842274 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-jrj4w"] Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.864587 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c63f9ee9-ee43-4787-a79a-57125c9239a2-secret-volume\") pod \"c63f9ee9-ee43-4787-a79a-57125c9239a2\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.864661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c63f9ee9-ee43-4787-a79a-57125c9239a2-config-volume\") pod \"c63f9ee9-ee43-4787-a79a-57125c9239a2\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.864701 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjjkf\" (UniqueName: \"kubernetes.io/projected/c63f9ee9-ee43-4787-a79a-57125c9239a2-kube-api-access-sjjkf\") pod \"c63f9ee9-ee43-4787-a79a-57125c9239a2\" (UID: \"c63f9ee9-ee43-4787-a79a-57125c9239a2\") " Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.865945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63f9ee9-ee43-4787-a79a-57125c9239a2-config-volume" (OuterVolumeSpecName: "config-volume") pod "c63f9ee9-ee43-4787-a79a-57125c9239a2" (UID: "c63f9ee9-ee43-4787-a79a-57125c9239a2"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.882820 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63f9ee9-ee43-4787-a79a-57125c9239a2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c63f9ee9-ee43-4787-a79a-57125c9239a2" (UID: "c63f9ee9-ee43-4787-a79a-57125c9239a2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.883046 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63f9ee9-ee43-4787-a79a-57125c9239a2-kube-api-access-sjjkf" (OuterVolumeSpecName: "kube-api-access-sjjkf") pod "c63f9ee9-ee43-4787-a79a-57125c9239a2" (UID: "c63f9ee9-ee43-4787-a79a-57125c9239a2"). InnerVolumeSpecName "kube-api-access-sjjkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.966009 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjjkf\" (UniqueName: \"kubernetes.io/projected/c63f9ee9-ee43-4787-a79a-57125c9239a2-kube-api-access-sjjkf\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.966055 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c63f9ee9-ee43-4787-a79a-57125c9239a2-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:00 crc kubenswrapper[4760]: I0226 09:46:00.966070 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c63f9ee9-ee43-4787-a79a-57125c9239a2-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:01 crc kubenswrapper[4760]: I0226 09:46:01.440418 4760 generic.go:334] "Generic (PLEG): container finished" podID="f02c9c60-3424-47fc-ab87-23a591f3af5d" 
containerID="dc37d00d125d8e56bdff415ce128b8be64415bc856412d8841b24fc7f715e083" exitCode=0 Feb 26 09:46:01 crc kubenswrapper[4760]: I0226 09:46:01.440479 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f02c9c60-3424-47fc-ab87-23a591f3af5d","Type":"ContainerDied","Data":"dc37d00d125d8e56bdff415ce128b8be64415bc856412d8841b24fc7f715e083"} Feb 26 09:46:01 crc kubenswrapper[4760]: I0226 09:46:01.448524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" event={"ID":"c63f9ee9-ee43-4787-a79a-57125c9239a2","Type":"ContainerDied","Data":"d31ba38a55c39024b1b0d180877badcd6b9dab39fb1eeb42c883e91e330dc4fe"} Feb 26 09:46:01 crc kubenswrapper[4760]: I0226 09:46:01.448556 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29534985-6vbhg" Feb 26 09:46:01 crc kubenswrapper[4760]: I0226 09:46:01.448591 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31ba38a55c39024b1b0d180877badcd6b9dab39fb1eeb42c883e91e330dc4fe" Feb 26 09:46:01 crc kubenswrapper[4760]: I0226 09:46:01.455554 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" event={"ID":"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28","Type":"ContainerStarted","Data":"04082692dbcdbcf48949e9906bbb075b3ce2034743b7f317848c840de495202e"} Feb 26 09:46:02 crc kubenswrapper[4760]: I0226 09:46:02.699705 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:46:02 crc kubenswrapper[4760]: I0226 09:46:02.892906 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f02c9c60-3424-47fc-ab87-23a591f3af5d-kubelet-dir\") pod \"f02c9c60-3424-47fc-ab87-23a591f3af5d\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " Feb 26 09:46:02 crc kubenswrapper[4760]: I0226 09:46:02.892998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f02c9c60-3424-47fc-ab87-23a591f3af5d-kube-api-access\") pod \"f02c9c60-3424-47fc-ab87-23a591f3af5d\" (UID: \"f02c9c60-3424-47fc-ab87-23a591f3af5d\") " Feb 26 09:46:02 crc kubenswrapper[4760]: I0226 09:46:02.893090 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f02c9c60-3424-47fc-ab87-23a591f3af5d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f02c9c60-3424-47fc-ab87-23a591f3af5d" (UID: "f02c9c60-3424-47fc-ab87-23a591f3af5d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:46:02 crc kubenswrapper[4760]: I0226 09:46:02.894022 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f02c9c60-3424-47fc-ab87-23a591f3af5d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:02 crc kubenswrapper[4760]: I0226 09:46:02.896609 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02c9c60-3424-47fc-ab87-23a591f3af5d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f02c9c60-3424-47fc-ab87-23a591f3af5d" (UID: "f02c9c60-3424-47fc-ab87-23a591f3af5d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:46:02 crc kubenswrapper[4760]: I0226 09:46:02.994620 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f02c9c60-3424-47fc-ab87-23a591f3af5d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:03 crc kubenswrapper[4760]: I0226 09:46:03.470216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f02c9c60-3424-47fc-ab87-23a591f3af5d","Type":"ContainerDied","Data":"7da3347d872d6701018d705bb8f4a0e73a574b572eaa53ebc3e794f07f7f6033"} Feb 26 09:46:03 crc kubenswrapper[4760]: I0226 09:46:03.470267 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da3347d872d6701018d705bb8f4a0e73a574b572eaa53ebc3e794f07f7f6033" Feb 26 09:46:03 crc kubenswrapper[4760]: I0226 09:46:03.470341 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 09:46:09 crc kubenswrapper[4760]: I0226 09:46:09.642891 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:46:09 crc kubenswrapper[4760]: I0226 09:46:09.644200 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:46:09 crc kubenswrapper[4760]: I0226 09:46:09.642925 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-6v588 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 26 09:46:09 crc kubenswrapper[4760]: I0226 09:46:09.644565 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6v588" podUID="de95d7ed-3895-43a6-b422-caae1114b0ec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 26 09:46:14 crc kubenswrapper[4760]: I0226 09:46:14.325979 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9fb57c5c6-ch52h"] Feb 26 09:46:14 crc kubenswrapper[4760]: I0226 09:46:14.326606 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" podUID="2b15690d-3d20-4630-bbec-5a122f6cca9a" containerName="controller-manager" containerID="cri-o://71af3d91b17634ef98427d471cc1fa09f2d273cd68ee800ef3d51f4a1fbf6a16" gracePeriod=30 Feb 26 09:46:14 crc kubenswrapper[4760]: I0226 09:46:14.416332 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs"] Feb 26 09:46:14 crc kubenswrapper[4760]: I0226 09:46:14.416555 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" podUID="6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" containerName="route-controller-manager" containerID="cri-o://5f6b9bb8b4e74224460cbaac5481486198673b40691b6d9539ae1a16afc1f779" gracePeriod=30 Feb 26 09:46:15 crc kubenswrapper[4760]: I0226 09:46:15.663032 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b15690d-3d20-4630-bbec-5a122f6cca9a" containerID="71af3d91b17634ef98427d471cc1fa09f2d273cd68ee800ef3d51f4a1fbf6a16" exitCode=0 Feb 26 09:46:15 crc kubenswrapper[4760]: I0226 09:46:15.663346 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" event={"ID":"2b15690d-3d20-4630-bbec-5a122f6cca9a","Type":"ContainerDied","Data":"71af3d91b17634ef98427d471cc1fa09f2d273cd68ee800ef3d51f4a1fbf6a16"} Feb 26 09:46:16 crc kubenswrapper[4760]: I0226 09:46:16.640283 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:46:16 crc kubenswrapper[4760]: I0226 09:46:16.640359 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:46:16 crc kubenswrapper[4760]: I0226 09:46:16.670658 4760 generic.go:334] "Generic (PLEG): container finished" podID="6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" containerID="5f6b9bb8b4e74224460cbaac5481486198673b40691b6d9539ae1a16afc1f779" exitCode=0 Feb 26 09:46:16 crc kubenswrapper[4760]: I0226 09:46:16.670740 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" event={"ID":"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3","Type":"ContainerDied","Data":"5f6b9bb8b4e74224460cbaac5481486198673b40691b6d9539ae1a16afc1f779"} Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.340525 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.347886 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.369759 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv"] Feb 26 09:46:18 crc kubenswrapper[4760]: E0226 09:46:18.369995 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02c9c60-3424-47fc-ab87-23a591f3af5d" containerName="pruner" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370010 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02c9c60-3424-47fc-ab87-23a591f3af5d" containerName="pruner" Feb 26 09:46:18 crc kubenswrapper[4760]: E0226 09:46:18.370037 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" containerName="route-controller-manager" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370045 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" containerName="route-controller-manager" Feb 26 09:46:18 crc kubenswrapper[4760]: E0226 09:46:18.370055 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63f9ee9-ee43-4787-a79a-57125c9239a2" containerName="collect-profiles" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370063 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63f9ee9-ee43-4787-a79a-57125c9239a2" containerName="collect-profiles" Feb 26 09:46:18 crc kubenswrapper[4760]: E0226 09:46:18.370073 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b15690d-3d20-4630-bbec-5a122f6cca9a" containerName="controller-manager" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370081 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b15690d-3d20-4630-bbec-5a122f6cca9a" containerName="controller-manager" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370190 4760 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2b15690d-3d20-4630-bbec-5a122f6cca9a" containerName="controller-manager" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370204 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63f9ee9-ee43-4787-a79a-57125c9239a2" containerName="collect-profiles" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370215 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02c9c60-3424-47fc-ab87-23a591f3af5d" containerName="pruner" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370223 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" containerName="route-controller-manager" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.370664 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.380093 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv"] Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.400425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-config\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.400509 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-client-ca\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc 
kubenswrapper[4760]: I0226 09:46:18.400570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-proxy-ca-bundles\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.400649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqcll\" (UniqueName: \"kubernetes.io/projected/8d70a973-5a18-4438-96cc-cc5393128039-kube-api-access-wqcll\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.400935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d70a973-5a18-4438-96cc-cc5393128039-serving-cert\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.502811 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-client-ca\") pod \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.502890 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdzw2\" (UniqueName: \"kubernetes.io/projected/2b15690d-3d20-4630-bbec-5a122f6cca9a-kube-api-access-zdzw2\") pod \"2b15690d-3d20-4630-bbec-5a122f6cca9a\" (UID: 
\"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.502920 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-proxy-ca-bundles\") pod \"2b15690d-3d20-4630-bbec-5a122f6cca9a\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.502949 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-config\") pod \"2b15690d-3d20-4630-bbec-5a122f6cca9a\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.502981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-serving-cert\") pod \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503108 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2kvp\" (UniqueName: \"kubernetes.io/projected/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-kube-api-access-h2kvp\") pod \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-config\") pod \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\" (UID: \"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503171 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2b15690d-3d20-4630-bbec-5a122f6cca9a-serving-cert\") pod \"2b15690d-3d20-4630-bbec-5a122f6cca9a\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503191 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-client-ca\") pod \"2b15690d-3d20-4630-bbec-5a122f6cca9a\" (UID: \"2b15690d-3d20-4630-bbec-5a122f6cca9a\") " Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503340 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-config\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-client-ca\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-proxy-ca-bundles\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503466 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqcll\" (UniqueName: 
\"kubernetes.io/projected/8d70a973-5a18-4438-96cc-cc5393128039-kube-api-access-wqcll\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.503520 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d70a973-5a18-4438-96cc-cc5393128039-serving-cert\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.504131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-client-ca" (OuterVolumeSpecName: "client-ca") pod "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" (UID: "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.504131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-config" (OuterVolumeSpecName: "config") pod "2b15690d-3d20-4630-bbec-5a122f6cca9a" (UID: "2b15690d-3d20-4630-bbec-5a122f6cca9a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.504666 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b15690d-3d20-4630-bbec-5a122f6cca9a" (UID: "2b15690d-3d20-4630-bbec-5a122f6cca9a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.505083 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-config\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.505067 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b15690d-3d20-4630-bbec-5a122f6cca9a" (UID: "2b15690d-3d20-4630-bbec-5a122f6cca9a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.505269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-config" (OuterVolumeSpecName: "config") pod "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" (UID: "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.505706 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-client-ca\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.506175 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-proxy-ca-bundles\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.509332 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-kube-api-access-h2kvp" (OuterVolumeSpecName: "kube-api-access-h2kvp") pod "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" (UID: "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3"). InnerVolumeSpecName "kube-api-access-h2kvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.509337 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" (UID: "6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.509385 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b15690d-3d20-4630-bbec-5a122f6cca9a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b15690d-3d20-4630-bbec-5a122f6cca9a" (UID: "2b15690d-3d20-4630-bbec-5a122f6cca9a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.509886 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b15690d-3d20-4630-bbec-5a122f6cca9a-kube-api-access-zdzw2" (OuterVolumeSpecName: "kube-api-access-zdzw2") pod "2b15690d-3d20-4630-bbec-5a122f6cca9a" (UID: "2b15690d-3d20-4630-bbec-5a122f6cca9a"). InnerVolumeSpecName "kube-api-access-zdzw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.510388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d70a973-5a18-4438-96cc-cc5393128039-serving-cert\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.521685 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqcll\" (UniqueName: \"kubernetes.io/projected/8d70a973-5a18-4438-96cc-cc5393128039-kube-api-access-wqcll\") pod \"controller-manager-5ff4c4cbd8-snvhv\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604703 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604744 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdzw2\" (UniqueName: \"kubernetes.io/projected/2b15690d-3d20-4630-bbec-5a122f6cca9a-kube-api-access-zdzw2\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604758 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604774 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604789 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604801 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2kvp\" (UniqueName: \"kubernetes.io/projected/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-kube-api-access-h2kvp\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604813 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.604826 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b15690d-3d20-4630-bbec-5a122f6cca9a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc 
kubenswrapper[4760]: I0226 09:46:18.604889 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b15690d-3d20-4630-bbec-5a122f6cca9a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.696910 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.808614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" event={"ID":"2b15690d-3d20-4630-bbec-5a122f6cca9a","Type":"ContainerDied","Data":"7120fdb08b2d7f6d317e18ea1e5cc6fa2a4891db1001010a2182de1908ae7352"} Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.809104 4760 scope.go:117] "RemoveContainer" containerID="71af3d91b17634ef98427d471cc1fa09f2d273cd68ee800ef3d51f4a1fbf6a16" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.808688 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.810919 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" event={"ID":"6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3","Type":"ContainerDied","Data":"b3e33679762e4769b657e1dd95af181d8ee3b26a09164a26c81432c3ef1430c9"} Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.811037 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs" Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.838896 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9fb57c5c6-ch52h"] Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.846633 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9fb57c5c6-ch52h"] Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.850318 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs"] Feb 26 09:46:18 crc kubenswrapper[4760]: I0226 09:46:18.853789 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b44b844dd-6cjzs"] Feb 26 09:46:19 crc kubenswrapper[4760]: I0226 09:46:19.203164 4760 patch_prober.go:28] interesting pod/controller-manager-9fb57c5c6-ch52h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:46:19 crc kubenswrapper[4760]: I0226 09:46:19.203818 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9fb57c5c6-ch52h" podUID="2b15690d-3d20-4630-bbec-5a122f6cca9a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 09:46:19 crc kubenswrapper[4760]: I0226 09:46:19.430812 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2tqr5"] Feb 26 09:46:19 crc kubenswrapper[4760]: I0226 09:46:19.657363 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6v588" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.583296 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b15690d-3d20-4630-bbec-5a122f6cca9a" path="/var/lib/kubelet/pods/2b15690d-3d20-4630-bbec-5a122f6cca9a/volumes" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.584288 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3" path="/var/lib/kubelet/pods/6b5f14bb-ce98-44c7-ba98-2b55bfdefcf3/volumes" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.827958 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx"] Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.830817 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.833685 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.833747 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.833899 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.833938 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.834119 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 09:46:20 crc 
kubenswrapper[4760]: I0226 09:46:20.834220 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.838695 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx"] Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.932757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr4hk\" (UniqueName: \"kubernetes.io/projected/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-kube-api-access-tr4hk\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.932852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-client-ca\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.932888 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-serving-cert\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:20 crc kubenswrapper[4760]: I0226 09:46:20.932908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-config\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.033859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr4hk\" (UniqueName: \"kubernetes.io/projected/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-kube-api-access-tr4hk\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.033929 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-client-ca\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.033957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-serving-cert\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.033972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-config\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" 
Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.035111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-config\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.035964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-client-ca\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.041269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-serving-cert\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.056662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr4hk\" (UniqueName: \"kubernetes.io/projected/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-kube-api-access-tr4hk\") pod \"route-controller-manager-588fbc8984-p5prx\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") " pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:21 crc kubenswrapper[4760]: I0226 09:46:21.161515 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:24 crc kubenswrapper[4760]: I0226 09:46:24.182565 4760 scope.go:117] "RemoveContainer" containerID="5f6b9bb8b4e74224460cbaac5481486198673b40691b6d9539ae1a16afc1f779" Feb 26 09:46:24 crc kubenswrapper[4760]: I0226 09:46:24.918460 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv"] Feb 26 09:46:25 crc kubenswrapper[4760]: I0226 09:46:25.219616 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx"] Feb 26 09:46:25 crc kubenswrapper[4760]: W0226 09:46:25.253780 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2a77cf4_474a_4ff1_b9dc_b5e20339a4a2.slice/crio-4cf5994c7d1c3347f0fe7dcb9a26f2690e51f12319efd78d57fc0e6e75307da3 WatchSource:0}: Error finding container 4cf5994c7d1c3347f0fe7dcb9a26f2690e51f12319efd78d57fc0e6e75307da3: Status 404 returned error can't find the container with id 4cf5994c7d1c3347f0fe7dcb9a26f2690e51f12319efd78d57fc0e6e75307da3 Feb 26 09:46:25 crc kubenswrapper[4760]: I0226 09:46:25.857892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" event={"ID":"8d70a973-5a18-4438-96cc-cc5393128039","Type":"ContainerStarted","Data":"8702bb41cb7b5f29feb1693000bf9e986d1dedf0c64ff9f3365649c638b23f96"} Feb 26 09:46:25 crc kubenswrapper[4760]: I0226 09:46:25.859347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" event={"ID":"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2","Type":"ContainerStarted","Data":"4cf5994c7d1c3347f0fe7dcb9a26f2690e51f12319efd78d57fc0e6e75307da3"} Feb 26 09:46:26 crc kubenswrapper[4760]: I0226 09:46:26.865106 4760 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" event={"ID":"8d70a973-5a18-4438-96cc-cc5393128039","Type":"ContainerStarted","Data":"bf92b383b495933c9f15e889f7c2f08b8be80474f9b03df647073dff52c658d8"} Feb 26 09:46:27 crc kubenswrapper[4760]: I0226 09:46:27.876800 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" event={"ID":"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2","Type":"ContainerStarted","Data":"a7867f8360c806091e92b267a2bdf7dd5a7b166445a6640c25d9387486857e20"} Feb 26 09:46:27 crc kubenswrapper[4760]: I0226 09:46:27.877153 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:27 crc kubenswrapper[4760]: I0226 09:46:27.886755 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:27 crc kubenswrapper[4760]: I0226 09:46:27.898306 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" podStartSLOduration=13.898277208 podStartE2EDuration="13.898277208s" podCreationTimestamp="2026-02-26 09:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:46:27.896337757 +0000 UTC m=+233.030283270" watchObservedRunningTime="2026-02-26 09:46:27.898277208 +0000 UTC m=+233.032222701" Feb 26 09:46:27 crc kubenswrapper[4760]: I0226 09:46:27.934784 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" podStartSLOduration=13.934755274 podStartE2EDuration="13.934755274s" podCreationTimestamp="2026-02-26 09:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:46:27.926562887 +0000 UTC m=+233.060508380" watchObservedRunningTime="2026-02-26 09:46:27.934755274 +0000 UTC m=+233.068700767" Feb 26 09:46:28 crc kubenswrapper[4760]: E0226 09:46:28.021328 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 09:46:28 crc kubenswrapper[4760]: E0226 09:46:28.021488 4760 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 09:46:28 crc kubenswrapper[4760]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 09:46:28 crc kubenswrapper[4760]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6f2kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29534986-jrj4w_openshift-infra(dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 09:46:28 crc kubenswrapper[4760]: > logger="UnhandledError" Feb 26 09:46:28 crc 
kubenswrapper[4760]: E0226 09:46:28.022727 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" Feb 26 09:46:28 crc kubenswrapper[4760]: I0226 09:46:28.882285 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:28 crc kubenswrapper[4760]: E0226 09:46:28.884938 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" Feb 26 09:46:28 crc kubenswrapper[4760]: I0226 09:46:28.888043 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:34 crc kubenswrapper[4760]: I0226 09:46:34.314604 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv"] Feb 26 09:46:34 crc kubenswrapper[4760]: I0226 09:46:34.315302 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" podUID="8d70a973-5a18-4438-96cc-cc5393128039" containerName="controller-manager" containerID="cri-o://bf92b383b495933c9f15e889f7c2f08b8be80474f9b03df647073dff52c658d8" gracePeriod=30 Feb 26 09:46:34 crc kubenswrapper[4760]: I0226 09:46:34.353329 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx"] Feb 26 09:46:34 crc 
kubenswrapper[4760]: I0226 09:46:34.353521 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" containerName="route-controller-manager" containerID="cri-o://a7867f8360c806091e92b267a2bdf7dd5a7b166445a6640c25d9387486857e20" gracePeriod=30 Feb 26 09:46:35 crc kubenswrapper[4760]: I0226 09:46:35.923555 4760 generic.go:334] "Generic (PLEG): container finished" podID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" containerID="a7867f8360c806091e92b267a2bdf7dd5a7b166445a6640c25d9387486857e20" exitCode=0 Feb 26 09:46:35 crc kubenswrapper[4760]: I0226 09:46:35.923692 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" event={"ID":"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2","Type":"ContainerDied","Data":"a7867f8360c806091e92b267a2bdf7dd5a7b166445a6640c25d9387486857e20"} Feb 26 09:46:35 crc kubenswrapper[4760]: I0226 09:46:35.925758 4760 generic.go:334] "Generic (PLEG): container finished" podID="8d70a973-5a18-4438-96cc-cc5393128039" containerID="bf92b383b495933c9f15e889f7c2f08b8be80474f9b03df647073dff52c658d8" exitCode=0 Feb 26 09:46:35 crc kubenswrapper[4760]: I0226 09:46:35.925786 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" event={"ID":"8d70a973-5a18-4438-96cc-cc5393128039","Type":"ContainerDied","Data":"bf92b383b495933c9f15e889f7c2f08b8be80474f9b03df647073dff52c658d8"} Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.742818 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.743920 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 
09:46:36.744092 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.744239 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ab931c6ee89813eba42021c556459016bac7810a93a167b53e69c7b6705fc5c5" gracePeriod=15 Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.744355 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3abb0dfbcfc7e859ea45ba5daf96d064ba260017ed48b5ba126c462e023fcf92" gracePeriod=15 Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.744408 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://41869c6aa4019c7a99928daadcc42b5e73f395a1e723ef5bb95cab3b460feaca" gracePeriod=15 Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.744468 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7f873bfd1bde256f3ba8b460ae2aeab0e0ec82743932e5905a251070d7b77954" gracePeriod=15 Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.744585 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://57c36a2d93b08bc9ea526508ee3c821fdeaff9b07ec98694105d32ec96f2d82f" gracePeriod=15 Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.745759 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746069 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746085 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746102 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746110 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746119 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746127 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746136 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746143 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746153 4760 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746160 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746170 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746178 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746186 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746193 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746203 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746210 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746226 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746233 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 
09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746359 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746370 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746380 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746390 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746401 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746411 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746421 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 09:46:36 crc kubenswrapper[4760]: E0226 09:46:36.746560 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746585 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746709 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.746947 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.846282 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.846335 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.846364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.846437 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: 
I0226 09:46:36.846454 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.846470 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.846484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.846597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947376 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 
26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947433 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947586 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947610 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947696 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947696 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947610 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947651 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947652 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947688 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947690 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947801 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:36 crc kubenswrapper[4760]: I0226 09:46:36.947857 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.937980 4760 generic.go:334] "Generic (PLEG): container finished" podID="2217860c-1b72-4728-9f27-d13f66cd5e7b" containerID="ec2e398566dbe4bcf322170912f49e387415fc7c0348d7a61a9f19991f4badef" 
exitCode=0 Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.938075 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2217860c-1b72-4728-9f27-d13f66cd5e7b","Type":"ContainerDied","Data":"ec2e398566dbe4bcf322170912f49e387415fc7c0348d7a61a9f19991f4badef"} Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.939257 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.939930 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.941039 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.942718 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.945519 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="57c36a2d93b08bc9ea526508ee3c821fdeaff9b07ec98694105d32ec96f2d82f" exitCode=0 Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.945557 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="3abb0dfbcfc7e859ea45ba5daf96d064ba260017ed48b5ba126c462e023fcf92" exitCode=0 Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.945568 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41869c6aa4019c7a99928daadcc42b5e73f395a1e723ef5bb95cab3b460feaca" exitCode=0 Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.945611 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f873bfd1bde256f3ba8b460ae2aeab0e0ec82743932e5905a251070d7b77954" exitCode=2 Feb 26 09:46:37 crc kubenswrapper[4760]: I0226 09:46:37.945633 4760 scope.go:117] "RemoveContainer" containerID="6a42004a8b808c4c7fbf7c8f2872c56e8a3de2367477d08143604816366a17b5" Feb 26 09:46:39 crc kubenswrapper[4760]: E0226 09:46:39.215496 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-ab931c6ee89813eba42021c556459016bac7810a93a167b53e69c7b6705fc5c5.scope\": RecentStats: unable to find data in memory cache]" Feb 26 09:46:39 crc kubenswrapper[4760]: I0226 09:46:39.698624 4760 patch_prober.go:28] interesting pod/controller-manager-5ff4c4cbd8-snvhv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 09:46:39 crc kubenswrapper[4760]: I0226 09:46:39.698746 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" podUID="8d70a973-5a18-4438-96cc-cc5393128039" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Feb 26 09:46:39 crc kubenswrapper[4760]: E0226 09:46:39.699763 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event=< Feb 26 09:46:39 crc kubenswrapper[4760]: &Event{ObjectMeta:{controller-manager-5ff4c4cbd8-snvhv.1897c2d548cbaa14 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-5ff4c4cbd8-snvhv,UID:8d70a973-5a18-4438-96cc-cc5393128039,APIVersion:v1,ResourceVersion:29644,FieldPath:spec.containers{controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.63:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 09:46:39 crc kubenswrapper[4760]: body: Feb 26 09:46:39 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:46:39.698709012 +0000 UTC m=+244.832654505,LastTimestamp:2026-02-26 09:46:39.698709012 +0000 UTC m=+244.832654505,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 09:46:39 crc kubenswrapper[4760]: > Feb 26 09:46:39 crc kubenswrapper[4760]: I0226 09:46:39.962813 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 09:46:39 crc kubenswrapper[4760]: I0226 09:46:39.964733 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ab931c6ee89813eba42021c556459016bac7810a93a167b53e69c7b6705fc5c5" exitCode=0 Feb 26 09:46:40 crc kubenswrapper[4760]: E0226 
09:46:40.124478 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:40 crc kubenswrapper[4760]: E0226 09:46:40.125343 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:40 crc kubenswrapper[4760]: E0226 09:46:40.125899 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:40 crc kubenswrapper[4760]: E0226 09:46:40.126924 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:40 crc kubenswrapper[4760]: E0226 09:46:40.127525 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:40 crc kubenswrapper[4760]: I0226 09:46:40.127654 4760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 26 09:46:40 crc kubenswrapper[4760]: E0226 09:46:40.128094 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Feb 26 09:46:40 crc kubenswrapper[4760]: 
E0226 09:46:40.329674 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Feb 26 09:46:40 crc kubenswrapper[4760]: E0226 09:46:40.731458 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.163019 4760 patch_prober.go:28] interesting pod/route-controller-manager-588fbc8984-p5prx container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.163105 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.469081 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.469695 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.469911 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.473348 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.473758 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.473947 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.511375 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.512416 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.513803 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.514343 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.514899 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: E0226 09:46:41.532308 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.577181 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 
09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.577777 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.578121 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.578663 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.610329 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.611312 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.611878 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.612201 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.612834 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613157 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613193 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-client-ca\") pod \"8d70a973-5a18-4438-96cc-cc5393128039\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613265 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2217860c-1b72-4728-9f27-d13f66cd5e7b-kube-api-access\") pod \"2217860c-1b72-4728-9f27-d13f66cd5e7b\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613353 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-proxy-ca-bundles\") pod \"8d70a973-5a18-4438-96cc-cc5393128039\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613392 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-var-lock\") pod \"2217860c-1b72-4728-9f27-d13f66cd5e7b\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqcll\" (UniqueName: \"kubernetes.io/projected/8d70a973-5a18-4438-96cc-cc5393128039-kube-api-access-wqcll\") pod \"8d70a973-5a18-4438-96cc-cc5393128039\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " Feb 26 09:46:41 crc 
kubenswrapper[4760]: I0226 09:46:41.613501 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-kubelet-dir\") pod \"2217860c-1b72-4728-9f27-d13f66cd5e7b\" (UID: \"2217860c-1b72-4728-9f27-d13f66cd5e7b\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613611 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-config\") pod \"8d70a973-5a18-4438-96cc-cc5393128039\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613670 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d70a973-5a18-4438-96cc-cc5393128039-serving-cert\") pod \"8d70a973-5a18-4438-96cc-cc5393128039\" (UID: \"8d70a973-5a18-4438-96cc-cc5393128039\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613704 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613739 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-var-lock" (OuterVolumeSpecName: "var-lock") pod "2217860c-1b72-4728-9f27-d13f66cd5e7b" (UID: "2217860c-1b72-4728-9f27-d13f66cd5e7b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613690 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2217860c-1b72-4728-9f27-d13f66cd5e7b" (UID: "2217860c-1b72-4728-9f27-d13f66cd5e7b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613708 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613907 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.613963 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.614270 4760 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.614283 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.614294 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.614305 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.614314 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2217860c-1b72-4728-9f27-d13f66cd5e7b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.615106 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-client-ca" (OuterVolumeSpecName: 
"client-ca") pod "8d70a973-5a18-4438-96cc-cc5393128039" (UID: "8d70a973-5a18-4438-96cc-cc5393128039"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.615205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-config" (OuterVolumeSpecName: "config") pod "8d70a973-5a18-4438-96cc-cc5393128039" (UID: "8d70a973-5a18-4438-96cc-cc5393128039"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.615762 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8d70a973-5a18-4438-96cc-cc5393128039" (UID: "8d70a973-5a18-4438-96cc-cc5393128039"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.619222 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d70a973-5a18-4438-96cc-cc5393128039-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8d70a973-5a18-4438-96cc-cc5393128039" (UID: "8d70a973-5a18-4438-96cc-cc5393128039"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.619236 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d70a973-5a18-4438-96cc-cc5393128039-kube-api-access-wqcll" (OuterVolumeSpecName: "kube-api-access-wqcll") pod "8d70a973-5a18-4438-96cc-cc5393128039" (UID: "8d70a973-5a18-4438-96cc-cc5393128039"). InnerVolumeSpecName "kube-api-access-wqcll". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.619430 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2217860c-1b72-4728-9f27-d13f66cd5e7b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2217860c-1b72-4728-9f27-d13f66cd5e7b" (UID: "2217860c-1b72-4728-9f27-d13f66cd5e7b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715077 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-serving-cert\") pod \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") "
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715208 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-client-ca\") pod \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") "
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715369 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr4hk\" (UniqueName: \"kubernetes.io/projected/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-kube-api-access-tr4hk\") pod \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") "
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715406 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-config\") pod \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\" (UID: \"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2\") "
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715820 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d70a973-5a18-4438-96cc-cc5393128039-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715837 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715847 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2217860c-1b72-4728-9f27-d13f66cd5e7b-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715861 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715871 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqcll\" (UniqueName: \"kubernetes.io/projected/8d70a973-5a18-4438-96cc-cc5393128039-kube-api-access-wqcll\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.715880 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d70a973-5a18-4438-96cc-cc5393128039-config\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.716211 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" (UID: "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.716682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-config" (OuterVolumeSpecName: "config") pod "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" (UID: "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.722307 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-kube-api-access-tr4hk" (OuterVolumeSpecName: "kube-api-access-tr4hk") pod "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" (UID: "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2"). InnerVolumeSpecName "kube-api-access-tr4hk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.727247 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" (UID: "e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:41 crc kubenswrapper[4760]: E0226 09:46:41.788142 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.788765 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.817429 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr4hk\" (UniqueName: \"kubernetes.io/projected/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-kube-api-access-tr4hk\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.817487 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-config\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.817500 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.817512 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.994715 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.994760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" event={"ID":"e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2","Type":"ContainerDied","Data":"4cf5994c7d1c3347f0fe7dcb9a26f2690e51f12319efd78d57fc0e6e75307da3"}
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.994819 4760 scope.go:117] "RemoveContainer" containerID="a7867f8360c806091e92b267a2bdf7dd5a7b166445a6640c25d9387486857e20"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.995404 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.995733 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.995929 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.996134 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:41 crc kubenswrapper[4760]: I0226 09:46:41.999708 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.004765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ab6b94724db0af11fecad895bee2f73a0eb1a33edb703756ce63624bd960605f"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.013247 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.013609 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.013768 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.013916 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.014055 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.015679 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmvz4" event={"ID":"6ee6a724-49ab-489e-84b5-cc2f96c89dc2","Type":"ContainerStarted","Data":"38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.016562 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.020770 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.021878 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.022194 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.022540 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.022819 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.025374 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-895t9" event={"ID":"919bb2ab-9fbf-4a58-835e-8348eebaf093","Type":"ContainerStarted","Data":"e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.026820 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.027054 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.027264 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.027444 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.027658 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.027849 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.028022 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.036914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8gj5" event={"ID":"bedbd455-baad-4b56-86b7-1d851407744b","Type":"ContainerStarted","Data":"425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.037765 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.037973 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.039225 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.039529 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.039747 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.039937 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.040248 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.041857 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.042820 4760 generic.go:334] "Generic (PLEG): container finished" podID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerID="bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8" exitCode=0
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.042863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wz6v" event={"ID":"5b918bed-a785-4a4d-a784-0860bdbadadf","Type":"ContainerDied","Data":"bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.043604 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.043860 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.044117 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.044777 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.045447 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.046016 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j58zh" event={"ID":"d5f41609-3893-4649-be8b-2a3c839f082a","Type":"ContainerStarted","Data":"495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.046519 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.047375 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.047665 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.047910 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.048277 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.048487 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.048799 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.049068 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.049382 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.049884 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvl2n" event={"ID":"7427c503-5c81-488e-b0f0-61b2537a96a4","Type":"ContainerStarted","Data":"9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.050404 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.050671 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.050908 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.051260 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.051509 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.054592 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.055246 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.055521 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.055715 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.055898 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.056042 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.056185 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.056337 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.056490 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.056681 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.056850 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.060854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzjzl" event={"ID":"3e598e10-dd81-4dce-ad36-a44df83ae7fd","Type":"ContainerStarted","Data":"f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.062022 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.062232 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.062554 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.062883 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.063079 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.064147 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.065744 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.067240 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.068411 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.068957 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.069345 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.069960 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.073367 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerStarted","Data":"8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61"}
Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.074229 4760 status_manager.go:851] "Failed to get status for pod"
podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.074512 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.074731 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.074975 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.075666 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076043 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076233 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076301 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076241 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2217860c-1b72-4728-9f27-d13f66cd5e7b","Type":"ContainerDied","Data":"6a4189b40350b8a989f63cf6a7999f83e3f27bb136f2412e010610abf153be4d"} Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076401 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a4189b40350b8a989f63cf6a7999f83e3f27bb136f2412e010610abf153be4d" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076456 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076696 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.076893 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.077082 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.077287 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.077746 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.084984 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.085660 4760 scope.go:117] "RemoveContainer" containerID="57c36a2d93b08bc9ea526508ee3c821fdeaff9b07ec98694105d32ec96f2d82f" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.085745 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.086645 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.086818 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.086962 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.087096 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.087317 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.087768 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.088092 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.088437 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.088604 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.088744 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.088905 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.088923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" event={"ID":"8d70a973-5a18-4438-96cc-cc5393128039","Type":"ContainerDied","Data":"8702bb41cb7b5f29feb1693000bf9e986d1dedf0c64ff9f3365649c638b23f96"} Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.089003 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.089164 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.089446 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.089705 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.089839 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.090046 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.090190 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.090659 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.096310 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.115915 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.127468 4760 scope.go:117] "RemoveContainer" containerID="3abb0dfbcfc7e859ea45ba5daf96d064ba260017ed48b5ba126c462e023fcf92" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.137532 4760 
status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.155466 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.175699 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.195326 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.216040 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.236059 4760 status_manager.go:851] "Failed to 
get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.364992 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.365266 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.365597 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.366126 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.366404 4760 status_manager.go:851] 
"Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.366695 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.376310 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.395762 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.406299 4760 scope.go:117] "RemoveContainer" containerID="41869c6aa4019c7a99928daadcc42b5e73f395a1e723ef5bb95cab3b460feaca" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.416309 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.431017 4760 scope.go:117] "RemoveContainer" containerID="7f873bfd1bde256f3ba8b460ae2aeab0e0ec82743932e5905a251070d7b77954" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.436129 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.456095 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.456433 4760 scope.go:117] "RemoveContainer" containerID="ab931c6ee89813eba42021c556459016bac7810a93a167b53e69c7b6705fc5c5" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.476470 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.478600 4760 scope.go:117] "RemoveContainer" containerID="a5e887362d4731b06c7ca639e3c1a69ae25e933cfc6bef5534cfa022ab97b09c" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.495863 4760 status_manager.go:851] "Failed to get status 
for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.515669 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.519641 4760 scope.go:117] "RemoveContainer" containerID="bf92b383b495933c9f15e889f7c2f08b8be80474f9b03df647073dff52c658d8" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.536004 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.556615 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.576235 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: 
connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.593296 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.596296 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.616513 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.636079 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.656141 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.676502 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.696342 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.715769 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.735897 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:42 crc kubenswrapper[4760]: I0226 09:46:42.756840 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: 
I0226 09:46:43.096697 4760 generic.go:334] "Generic (PLEG): container finished" podID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerID="e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f" exitCode=0 Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.097223 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-895t9" event={"ID":"919bb2ab-9fbf-4a58-835e-8348eebaf093","Type":"ContainerDied","Data":"e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f"} Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.099059 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.099437 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.101101 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.101454 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.101801 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.102465 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.103184 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.103940 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.104561 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.105004 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.105317 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.105666 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.109891 4760 generic.go:334] "Generic (PLEG): container finished" podID="bedbd455-baad-4b56-86b7-1d851407744b" containerID="425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa" exitCode=0 Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.109952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8gj5" 
event={"ID":"bedbd455-baad-4b56-86b7-1d851407744b","Type":"ContainerDied","Data":"425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa"} Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.111330 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.112388 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.112950 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.113293 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.113606 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.115469 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.115557 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" event={"ID":"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28","Type":"ContainerStarted","Data":"cca58d2544314ed47085ddbb220223f9ff63b73a6c043d5baaca8e4c925da0a5"} Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.120189 4760 generic.go:334] "Generic (PLEG): container finished" podID="d5f41609-3893-4649-be8b-2a3c839f082a" containerID="495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd" exitCode=0 Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.120260 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j58zh" event={"ID":"d5f41609-3893-4649-be8b-2a3c839f082a","Type":"ContainerDied","Data":"495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd"} Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.127934 4760 generic.go:334] "Generic (PLEG): container finished" podID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerID="8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61" exitCode=0 Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.127998 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" 
event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerDied","Data":"8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61"} Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.132516 4760 generic.go:334] "Generic (PLEG): container finished" podID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerID="9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc" exitCode=0 Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.132703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvl2n" event={"ID":"7427c503-5c81-488e-b0f0-61b2537a96a4","Type":"ContainerDied","Data":"9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc"} Feb 26 09:46:43 crc kubenswrapper[4760]: E0226 09:46:43.134748 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.135953 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6"} Feb 26 09:46:43 crc kubenswrapper[4760]: E0226 09:46:43.156593 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.180429 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.181176 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.196178 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.227604 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.235969 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.256915 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.275641 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.425323 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.425611 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.425887 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.426465 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.426930 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.427472 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.427745 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.436237 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.455959 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.476006 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:43 crc kubenswrapper[4760]: I0226 09:46:43.496326 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.174756 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wz6v" event={"ID":"5b918bed-a785-4a4d-a784-0860bdbadadf","Type":"ContainerStarted","Data":"e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748"} Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.176242 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.177007 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.177329 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.177757 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.177777 4760 generic.go:334] "Generic (PLEG): container finished" podID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" containerID="cca58d2544314ed47085ddbb220223f9ff63b73a6c043d5baaca8e4c925da0a5" exitCode=0 Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.177865 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" event={"ID":"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28","Type":"ContainerDied","Data":"cca58d2544314ed47085ddbb220223f9ff63b73a6c043d5baaca8e4c925da0a5"} Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.178098 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 
38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.178497 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: E0226 09:46:44.178806 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.179025 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.179384 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.179692 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc 
kubenswrapper[4760]: I0226 09:46:44.180024 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.180318 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.182339 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.182756 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.183049 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": 
dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.183362 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.183697 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.183934 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.184196 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.184588 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.184907 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.185291 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.185752 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.186099 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.186408 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:44 crc kubenswrapper[4760]: I0226 09:46:44.502318 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" containerName="oauth-openshift" containerID="cri-o://785f9a550d2d35149d52d4e37a5d639abfd426238f120f013bcc1cd37453ce61" gracePeriod=15 Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.333655 4760 generic.go:334] "Generic (PLEG): container finished" podID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerID="38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0" exitCode=0 Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.333721 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmvz4" event={"ID":"6ee6a724-49ab-489e-84b5-cc2f96c89dc2","Type":"ContainerDied","Data":"38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.335098 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.335310 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 
09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.335495 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.335659 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.335803 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.335948 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.336093 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc 
kubenswrapper[4760]: I0226 09:46:45.336246 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.336404 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.336544 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.336713 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.336855 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 
26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.346614 4760 generic.go:334] "Generic (PLEG): container finished" podID="e2b4386d-728b-43e0-83e7-030a977d88dd" containerID="785f9a550d2d35149d52d4e37a5d639abfd426238f120f013bcc1cd37453ce61" exitCode=0 Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.346696 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" event={"ID":"e2b4386d-728b-43e0-83e7-030a977d88dd","Type":"ContainerDied","Data":"785f9a550d2d35149d52d4e37a5d639abfd426238f120f013bcc1cd37453ce61"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.348561 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerStarted","Data":"264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.349727 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.349900 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.350087 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.350246 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.350384 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.350671 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.350999 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.351145 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.351295 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.351449 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.351627 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.351790 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.371796 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-895t9" event={"ID":"919bb2ab-9fbf-4a58-835e-8348eebaf093","Type":"ContainerStarted","Data":"acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.373883 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.374081 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.374268 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.374478 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.374687 4760 status_manager.go:851] "Failed to get status 
for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.375012 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.375456 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.375867 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.376088 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.376310 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.376508 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.377276 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.379873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8gj5" event={"ID":"bedbd455-baad-4b56-86b7-1d851407744b","Type":"ContainerStarted","Data":"875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.381081 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.381244 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.381421 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.381610 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.381784 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.381934 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.385816 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.386460 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.386952 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.387193 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.387383 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.389536 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.391829 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j58zh" event={"ID":"d5f41609-3893-4649-be8b-2a3c839f082a","Type":"ContainerStarted","Data":"879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.393066 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.393276 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.393433 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.393606 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" 
pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.393793 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.393984 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.394179 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.394364 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.394510 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" 
pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.394698 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.394860 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.395283 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.406806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvl2n" event={"ID":"7427c503-5c81-488e-b0f0-61b2537a96a4","Type":"ContainerStarted","Data":"aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.408084 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.408315 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.408628 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.408837 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.409182 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.409395 4760 status_manager.go:851] "Failed to get 
status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.409617 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.409799 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.410281 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.410548 4760 generic.go:334] "Generic (PLEG): container finished" podID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerID="f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312" exitCode=0 Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.410755 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzjzl" 
event={"ID":"3e598e10-dd81-4dce-ad36-a44df83ae7fd","Type":"ContainerDied","Data":"f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312"} Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.410933 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.415599 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.436286 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.459395 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.476529 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.496274 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.516189 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.582180 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.582703 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.582994 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.595837 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.615660 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.636483 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.656261 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:45 crc kubenswrapper[4760]: I0226 09:46:45.675990 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" 
pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.263131 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.263749 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.263934 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.264082 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.264224 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 
38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.267783 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.268243 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.268445 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.269082 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.269362 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: 
connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.269557 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.273747 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.275764 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.277526 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.294523 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.295476 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.298771 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.299257 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.299428 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.299559 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.299727 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.299865 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.299999 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.300263 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.300404 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.300750 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.301028 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.301436 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: E0226 09:46:46.336669 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="6.4s" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.421046 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" 
event={"ID":"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28","Type":"ContainerDied","Data":"04082692dbcdbcf48949e9906bbb075b3ce2034743b7f317848c840de495202e"} Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.421108 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04082692dbcdbcf48949e9906bbb075b3ce2034743b7f317848c840de495202e" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.421217 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.425199 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmvz4" event={"ID":"6ee6a724-49ab-489e-84b5-cc2f96c89dc2","Type":"ContainerStarted","Data":"697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc"} Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-cliconfig\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-login\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426799 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-error\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: 
\"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426842 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kxzm\" (UniqueName: \"kubernetes.io/projected/e2b4386d-728b-43e0-83e7-030a977d88dd-kube-api-access-2kxzm\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-trusted-ca-bundle\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426916 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-ocp-branding-template\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426947 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-provider-selection\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.426995 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-session\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " 
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.427040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-router-certs\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.427079 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-policies\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.427121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-serving-cert\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.427195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-dir\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.427251 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-service-ca\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.427291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-idp-0-file-data\") pod \"e2b4386d-728b-43e0-83e7-030a977d88dd\" (UID: \"e2b4386d-728b-43e0-83e7-030a977d88dd\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.427360 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f2kp\" (UniqueName: \"kubernetes.io/projected/dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28-kube-api-access-6f2kp\") pod \"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28\" (UID: \"dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28\") " Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.428850 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.429090 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.429145 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.429355 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.429814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.430131 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.430426 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.430653 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.430907 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.431124 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.431393 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.432029 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.432283 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.432554 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.432803 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.433007 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.436272 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.437917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" event={"ID":"e2b4386d-728b-43e0-83e7-030a977d88dd","Type":"ContainerDied","Data":"3d68b0eb600f589fa9d62900f446ea39b4601836fed40a57a6cdd667241dcbef"} Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.438034 4760 scope.go:117] "RemoveContainer" containerID="785f9a550d2d35149d52d4e37a5d639abfd426238f120f013bcc1cd37453ce61" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.438419 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.457183 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.478772 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.496664 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.517662 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.530204 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.531218 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.531277 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e2b4386d-728b-43e0-83e7-030a977d88dd-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.531293 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.531341 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.536450 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" 
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.556818 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.576128 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.580029 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28-kube-api-access-6f2kp" (OuterVolumeSpecName: "kube-api-access-6f2kp") pod "dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" (UID: "dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28"). InnerVolumeSpecName "kube-api-access-6f2kp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.581053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.581465 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.582309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b4386d-728b-43e0-83e7-030a977d88dd-kube-api-access-2kxzm" (OuterVolumeSpecName: "kube-api-access-2kxzm") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "kube-api-access-2kxzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.582431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.582651 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.583051 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.583714 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.583811 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.584233 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e2b4386d-728b-43e0-83e7-030a977d88dd" (UID: "e2b4386d-728b-43e0-83e7-030a977d88dd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.596953 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.615537 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.632945 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633098 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633136 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kxzm\" (UniqueName: \"kubernetes.io/projected/e2b4386d-728b-43e0-83e7-030a977d88dd-kube-api-access-2kxzm\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633148 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633158 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633168 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633177 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633187 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633198 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e2b4386d-728b-43e0-83e7-030a977d88dd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.633209 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f2kp\" (UniqueName: \"kubernetes.io/projected/dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28-kube-api-access-6f2kp\") on node \"crc\" DevicePath \"\""
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.635352 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.639856 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.639902 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.656983 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.676514 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.696145 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.715518 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.735593 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.755363 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.775894 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.796045 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.815324 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.835553 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.856216 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.876137 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.896014 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.915564 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.935820 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.955776 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.975824 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:46 crc kubenswrapper[4760]: I0226 09:46:46.996499 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.016935 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.036408 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.056627 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.075776 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.096081 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.116081 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.135472 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.156127 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.175801 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.196383 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.215688 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.235861 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: E0226 09:46:47.241467 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event=<
Feb 26 09:46:47 crc kubenswrapper[4760]: &Event{ObjectMeta:{controller-manager-5ff4c4cbd8-snvhv.1897c2d548cbaa14 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-5ff4c4cbd8-snvhv,UID:8d70a973-5a18-4438-96cc-cc5393128039,APIVersion:v1,ResourceVersion:29644,FieldPath:spec.containers{controller-manager},},Reason:ProbeError,Message:Readiness probe error: Get "https://10.217.0.63:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 26 09:46:47 crc kubenswrapper[4760]: body:
Feb 26 09:46:47 crc kubenswrapper[4760]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 09:46:39.698709012 +0000 UTC m=+244.832654505,LastTimestamp:2026-02-26 09:46:39.698709012 +0000 UTC m=+244.832654505,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 26 09:46:47 crc kubenswrapper[4760]: >
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.445877 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzjzl" event={"ID":"3e598e10-dd81-4dce-ad36-a44df83ae7fd","Type":"ContainerStarted","Data":"9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316"}
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.447093 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.447564 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.447890 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.448353 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.448656 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.448900 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.449164 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.449448 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.449739 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.450095 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.455722 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.475891 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.495955 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.706529 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.706592 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g8gj5"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.889787 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hvl2n"
Feb 26 09:46:47 crc kubenswrapper[4760]: I0226 09:46:47.889856 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hvl2n"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.034902 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-895t9"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.035186 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-895t9"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.465373 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j58zh"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.465716 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j58zh"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.575852 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.576486 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.576963 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.577281 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.577664 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.578082 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.578286 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.578496 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.579342 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.579702 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.579894 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.580086 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.580359 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.580780 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.598011 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.598242 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0"
Feb 26 09:46:48 crc kubenswrapper[4760]: E0226 09:46:48.598818 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:46:48 crc kubenswrapper[4760]: I0226 09:46:48.599359 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:46:48 crc kubenswrapper[4760]: W0226 09:46:48.644482 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-cc9e12af4e472c0bb9d482afe09c35da949d4794374fe449b9434040f422fa7f WatchSource:0}: Error finding container cc9e12af4e472c0bb9d482afe09c35da949d4794374fe449b9434040f422fa7f: Status 404 returned error can't find the container with id cc9e12af4e472c0bb9d482afe09c35da949d4794374fe449b9434040f422fa7f
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.132055 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-895t9" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:46:49 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:46:49 crc kubenswrapper[4760]: >
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.134199 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g8gj5" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:46:49 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:46:49 crc kubenswrapper[4760]: >
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.218639 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hvl2n" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:46:49 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:46:49 crc kubenswrapper[4760]: >
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.479555 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e3c1ad4ca39d6cc02497c60e15b71c41970849b598ade1c8c66b236233d69dcf"}
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.479651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cc9e12af4e472c0bb9d482afe09c35da949d4794374fe449b9434040f422fa7f"}
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.548631 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j58zh" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="registry-server" probeResult="failure" output=<
Feb 26 09:46:49 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Feb 26 09:46:49 crc kubenswrapper[4760]: >
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.652205 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5wz6v"
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.652258 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5wz6v"
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.780430 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5wz6v"
Feb 26 09:46:49 crc kubenswrapper[4760]: I0226
09:46:49.781086 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.781618 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.781814 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.781972 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.782121 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.782270 4760 
status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.782413 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.782554 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.782730 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.782898 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection 
refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.783043 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.783192 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: I0226 09:46:49.783337 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: E0226 09:46:49.815406 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:46:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:46:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:46:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T09:46:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: E0226 09:46:49.815733 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: E0226 09:46:49.816210 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: E0226 09:46:49.816633 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 
09:46:49 crc kubenswrapper[4760]: E0226 09:46:49.816869 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:49 crc kubenswrapper[4760]: E0226 09:46:49.816887 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.075803 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.075845 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.124721 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.125459 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.125879 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.126173 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" 
pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.126471 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.126773 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.127056 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.127500 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.128149 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.128478 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.128967 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.129369 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.129750 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 
09:46:50.130078 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.487120 4760 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e3c1ad4ca39d6cc02497c60e15b71c41970849b598ade1c8c66b236233d69dcf" exitCode=0 Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.487212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e3c1ad4ca39d6cc02497c60e15b71c41970849b598ade1c8c66b236233d69dcf"} Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.487565 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.487626 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.487939 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: E0226 09:46:50.488147 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.107:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.488235 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.488441 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.488820 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.490086 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.491936 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.492515 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.499859 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.500652 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.501293 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.501511 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.501937 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.502338 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.528722 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.529917 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.530383 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": 
dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.530692 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.530923 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.531172 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.531432 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.531855 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial 
tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.532359 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.532840 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.533386 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.533705 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.533946 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.534268 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.534537 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.535207 4760 status_manager.go:851] "Failed to get status for pod" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" pod="openshift-marketplace/redhat-marketplace-pzmc2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-pzmc2\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.535472 4760 status_manager.go:851] "Failed to get status for pod" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" pod="openshift-marketplace/certified-operators-895t9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-895t9\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.535755 4760 status_manager.go:851] "Failed to get status for pod" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" pod="openshift-marketplace/community-operators-j58zh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j58zh\": dial tcp 38.102.83.107:6443: connect: connection 
refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.536027 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" pod="openshift-authentication/oauth-openshift-558db77b4-2tqr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-2tqr5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.536289 4760 status_manager.go:851] "Failed to get status for pod" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" pod="openshift-marketplace/redhat-marketplace-5wz6v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-5wz6v\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.536703 4760 status_manager.go:851] "Failed to get status for pod" podUID="bedbd455-baad-4b56-86b7-1d851407744b" pod="openshift-marketplace/certified-operators-g8gj5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-g8gj5\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.537180 4760 status_manager.go:851] "Failed to get status for pod" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.537417 4760 status_manager.go:851] "Failed to get status for pod" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" pod="openshift-marketplace/community-operators-hvl2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hvl2n\": dial tcp 38.102.83.107:6443: connect: connection 
refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.537655 4760 status_manager.go:851] "Failed to get status for pod" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" pod="openshift-marketplace/redhat-operators-jmvz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jmvz4\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.537861 4760 status_manager.go:851] "Failed to get status for pod" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" pod="openshift-infra/auto-csr-approver-29534986-jrj4w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29534986-jrj4w\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.538061 4760 status_manager.go:851] "Failed to get status for pod" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" pod="openshift-marketplace/redhat-operators-zzjzl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zzjzl\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.538242 4760 status_manager.go:851] "Failed to get status for pod" podUID="8d70a973-5a18-4438-96cc-cc5393128039" pod="openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5ff4c4cbd8-snvhv\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:50 crc kubenswrapper[4760]: I0226 09:46:50.538466 4760 status_manager.go:851] "Failed to get status for pod" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" pod="openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-588fbc8984-p5prx\": dial tcp 38.102.83.107:6443: connect: connection refused" Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.428637 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.428769 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.497314 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.497977 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.498128 4760 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="54fded501ee4a42db6029006dead3d4edaf44ba6c748b8ca880efd3b039cd24f" exitCode=1 Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.498168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"54fded501ee4a42db6029006dead3d4edaf44ba6c748b8ca880efd3b039cd24f"} Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.498759 4760 scope.go:117] "RemoveContainer" containerID="54fded501ee4a42db6029006dead3d4edaf44ba6c748b8ca880efd3b039cd24f" Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.501326 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e1d9129537d621cf6b9c2fd123914a1f34faaa59e18dddddea6730d6a4ecd9ac"} Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.954469 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:46:51 crc kubenswrapper[4760]: I0226 09:46:51.954825 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.233331 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.493583 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jmvz4" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="registry-server" probeResult="failure" output=< Feb 26 09:46:52 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Feb 26 09:46:52 crc kubenswrapper[4760]: > Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.511984 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b8cc10dac9d15c3a87b1c54e6aedd24a904c6e26f48ee624dd76ac8c8d71425b"} Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.512024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"91fc0d0b9bfce245e7d3a4e1265f4123d45087d9d0e68c140fdcc81ff63d9022"} Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.512036 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e9bffac12f90776149812a326fd6aff0165d5ee1fe62d666bdab837374dda878"} Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.521497 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.523096 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 09:46:52 crc kubenswrapper[4760]: I0226 09:46:52.523178 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"25afda1a3f4bdc3f3d64969629e98037c20c880dd29b88dec502440b14a09cbc"} Feb 26 09:46:53 crc kubenswrapper[4760]: I0226 09:46:53.052091 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zzjzl" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="registry-server" probeResult="failure" output=< Feb 26 09:46:53 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Feb 26 09:46:53 crc kubenswrapper[4760]: > Feb 26 09:46:53 crc kubenswrapper[4760]: I0226 09:46:53.641524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a01ef8e237b44a962869b62d6ffab0473ed8141039967fb51ea4abdf55af9401"} Feb 26 09:46:53 crc kubenswrapper[4760]: I0226 09:46:53.641942 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:46:53 crc kubenswrapper[4760]: I0226 09:46:53.641965 4760 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:46:57 crc kubenswrapper[4760]: I0226 09:46:57.330717 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:46:57 crc kubenswrapper[4760]: I0226 09:46:57.697449 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g8gj5" Feb 26 09:46:57 crc kubenswrapper[4760]: I0226 09:46:57.736571 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g8gj5" Feb 26 09:46:57 crc kubenswrapper[4760]: I0226 09:46:57.946236 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:46:57 crc kubenswrapper[4760]: I0226 09:46:57.991908 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.077549 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.129336 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.299273 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.346643 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.599794 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.600647 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.600702 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.613315 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:58 crc kubenswrapper[4760]: I0226 09:46:58.766554 4760 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:59 crc kubenswrapper[4760]: I0226 09:46:59.229230 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="37335650-5072-4005-b980-beed3f1df844" Feb 26 09:46:59 crc kubenswrapper[4760]: I0226 09:46:59.699108 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:46:59 crc kubenswrapper[4760]: I0226 09:46:59.699141 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:46:59 crc kubenswrapper[4760]: I0226 09:46:59.704389 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 09:46:59 crc kubenswrapper[4760]: I0226 09:46:59.707409 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="37335650-5072-4005-b980-beed3f1df844" Feb 26 09:47:00 
crc kubenswrapper[4760]: I0226 09:47:00.706934 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:47:00 crc kubenswrapper[4760]: I0226 09:47:00.707271 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0" Feb 26 09:47:00 crc kubenswrapper[4760]: I0226 09:47:00.713275 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="37335650-5072-4005-b980-beed3f1df844" Feb 26 09:47:01 crc kubenswrapper[4760]: I0226 09:47:01.377030 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:47:01 crc kubenswrapper[4760]: I0226 09:47:01.426520 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:47:01 crc kubenswrapper[4760]: I0226 09:47:01.659295 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:47:01 crc kubenswrapper[4760]: I0226 09:47:01.701874 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:47:01 crc kubenswrapper[4760]: I0226 09:47:01.717372 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 09:47:01 crc kubenswrapper[4760]: I0226 09:47:01.996065 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:47:02 crc kubenswrapper[4760]: I0226 09:47:02.030196 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:47:08 crc kubenswrapper[4760]: I0226 09:47:08.828155 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 09:47:09 crc kubenswrapper[4760]: I0226 09:47:09.660966 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 09:47:09 crc kubenswrapper[4760]: I0226 09:47:09.991319 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 09:47:09 crc kubenswrapper[4760]: I0226 09:47:09.994975 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 09:47:10 crc kubenswrapper[4760]: I0226 09:47:10.402313 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 09:47:10 crc kubenswrapper[4760]: I0226 09:47:10.494651 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 09:47:10 crc kubenswrapper[4760]: I0226 09:47:10.696815 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 09:47:10 crc kubenswrapper[4760]: I0226 09:47:10.843854 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 09:47:10 crc kubenswrapper[4760]: I0226 09:47:10.863564 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 09:47:10 crc kubenswrapper[4760]: I0226 09:47:10.893435 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 09:47:11 crc 
kubenswrapper[4760]: I0226 09:47:11.014138 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.072791 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.124161 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.265493 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.308540 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.333173 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.341100 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.463406 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.594740 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.617034 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.687380 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.764051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.798829 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.861781 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.955054 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 09:47:11 crc kubenswrapper[4760]: I0226 09:47:11.989975 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.244837 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.386356 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.403794 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.577601 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.637588 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 09:47:12 crc kubenswrapper[4760]: 
I0226 09:47:12.660534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.676142 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.770699 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.782126 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.794302 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 09:47:12 crc kubenswrapper[4760]: I0226 09:47:12.818656 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.009063 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.027236 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.088865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.118757 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.377963 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.683622 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.843699 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.866377 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.934522 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.937537 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 09:47:13 crc kubenswrapper[4760]: I0226 09:47:13.940170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.013821 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.101940 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.182440 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.295418 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.550222 4760 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.737858 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.739822 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.765363 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.765647 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.766625 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.809564 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.906474 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.910535 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.915020 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 09:47:14 crc kubenswrapper[4760]: I0226 09:47:14.952650 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 09:47:14 crc 
kubenswrapper[4760]: I0226 09:47:14.990772 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.056723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.076857 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.150231 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.190955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.209268 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.267024 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.343032 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.432762 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.515522 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.515878 4760 reflector.go:368] Caches 
populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.536790 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.597835 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.632977 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.787392 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.805325 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.869226 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.886736 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.921414 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 09:47:15 crc kubenswrapper[4760]: I0226 09:47:15.968930 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.088280 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.101280 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.121977 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.130626 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.266957 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.303593 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.484975 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.487820 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.504975 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.558365 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.584529 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.637517 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 09:47:16 crc kubenswrapper[4760]: 
I0226 09:47:16.640188 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.640248 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.640297 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.640906 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4efbe79637d17378d1e3c83568f1cb588976a61342df5089c0211e4fb3d69b9"} pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.640965 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" containerID="cri-o://f4efbe79637d17378d1e3c83568f1cb588976a61342df5089c0211e4fb3d69b9" gracePeriod=600 Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.662086 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.680128 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.703278 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.709231 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.723944 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.924621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.932490 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.956808 4760 generic.go:334] "Generic (PLEG): container finished" podID="62f749b1-23a5-43f1-8568-b98b688944fc" containerID="f4efbe79637d17378d1e3c83568f1cb588976a61342df5089c0211e4fb3d69b9" exitCode=0 Feb 26 09:47:16 crc kubenswrapper[4760]: I0226 09:47:16.956855 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerDied","Data":"f4efbe79637d17378d1e3c83568f1cb588976a61342df5089c0211e4fb3d69b9"} Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.065079 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.315381 4760 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.397566 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.600456 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.625628 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.625737 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.773231 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.885902 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 09:47:17 crc kubenswrapper[4760]: I0226 09:47:17.965378 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerStarted","Data":"e8a6b715fa4c1ecb177b72a20cf5ceb53a06a6669ca4244b7787f46455bad25b"} Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.095175 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.100611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.111623 4760 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.153709 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.267041 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.310498 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.434960 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.504154 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.643044 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.661125 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.832085 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.906399 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 09:47:18 crc kubenswrapper[4760]: I0226 09:47:18.956727 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.000052 4760 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.002072 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.232159 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.275076 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.283288 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.366086 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.367759 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.378186 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.402602 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.437584 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.744519 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 
09:47:19 crc kubenswrapper[4760]: I0226 09:47:19.744612 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.418631 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.420844 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.420844 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.424479 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.424564 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.424731 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.430715 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.431158 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.431451 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.431669 4760 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.431938 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.432067 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.431511 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.431974 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.432801 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.432945 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.432971 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.445295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.535371 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.538302 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 
09:47:20.554047 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.620394 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.694062 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.702850 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.734827 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.758555 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.855808 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.900284 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.913426 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.947364 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.948907 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Feb 26 09:47:20 crc kubenswrapper[4760]: I0226 09:47:20.978344 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.071706 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.118107 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.245336 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.256073 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.312865 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.341521 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.363850 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.390420 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.411398 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 
09:47:21.527707 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.533971 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.571151 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.592115 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.594670 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.610401 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.652432 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.667604 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.686207 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.725778 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.813375 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 
09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.836556 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.864275 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.884102 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 09:47:21 crc kubenswrapper[4760]: I0226 09:47:21.929838 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.064716 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.124514 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.124522 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.179023 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.204905 4760 ???:1] "http: TLS handshake error from 192.168.126.11:34992: no serving certificate available for the kubelet" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.229473 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.312865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.418670 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.434126 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.461889 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.526963 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.529851 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.640326 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.654983 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.779981 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.821760 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.863795 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 09:47:22 crc kubenswrapper[4760]: I0226 09:47:22.922337 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.029247 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.162078 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.275880 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.360210 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.381904 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.412854 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.413243 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.426523 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.426772 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.544476 4760 ???:1] "http: TLS handshake error from 192.168.126.11:48036: no serving certificate available for the kubelet" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 
09:47:23.579520 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.758702 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 09:47:23 crc kubenswrapper[4760]: I0226 09:47:23.811228 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.243700 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.514918 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.515298 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pzmc2" podStartSLOduration=42.802749615 podStartE2EDuration="2m45.515285347s" podCreationTimestamp="2026-02-26 09:44:39 +0000 UTC" firstStartedPulling="2026-02-26 09:44:41.966235854 +0000 UTC m=+127.100181347" lastFinishedPulling="2026-02-26 09:46:44.678771596 +0000 UTC m=+249.812717079" observedRunningTime="2026-02-26 09:46:58.914116423 +0000 UTC m=+264.048061916" watchObservedRunningTime="2026-02-26 09:47:24.515285347 +0000 UTC m=+289.649230830" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.515920 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5wz6v" podStartSLOduration=44.46915051 podStartE2EDuration="2m45.515914744s" podCreationTimestamp="2026-02-26 09:44:39 +0000 UTC" firstStartedPulling="2026-02-26 09:44:41.966226424 +0000 UTC m=+127.100171917" lastFinishedPulling="2026-02-26 09:46:43.012990658 +0000 UTC m=+248.146936151" 
observedRunningTime="2026-02-26 09:46:58.9983797 +0000 UTC m=+264.132325193" watchObservedRunningTime="2026-02-26 09:47:24.515914744 +0000 UTC m=+289.649860237" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.517972 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j58zh" podStartSLOduration=43.246067265 podStartE2EDuration="2m47.517962067s" podCreationTimestamp="2026-02-26 09:44:37 +0000 UTC" firstStartedPulling="2026-02-26 09:44:39.789567482 +0000 UTC m=+124.923512975" lastFinishedPulling="2026-02-26 09:46:44.061462284 +0000 UTC m=+249.195407777" observedRunningTime="2026-02-26 09:46:58.983087331 +0000 UTC m=+264.117032844" watchObservedRunningTime="2026-02-26 09:47:24.517962067 +0000 UTC m=+289.651907550" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.518058 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmvz4" podStartSLOduration=41.478700591 podStartE2EDuration="2m44.51805493s" podCreationTimestamp="2026-02-26 09:44:40 +0000 UTC" firstStartedPulling="2026-02-26 09:44:42.975543766 +0000 UTC m=+128.109489259" lastFinishedPulling="2026-02-26 09:46:46.014898105 +0000 UTC m=+251.148843598" observedRunningTime="2026-02-26 09:46:59.059548445 +0000 UTC m=+264.193493938" watchObservedRunningTime="2026-02-26 09:47:24.51805493 +0000 UTC m=+289.652000423" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.518272 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zzjzl" podStartSLOduration=41.355917286 podStartE2EDuration="2m43.518268845s" podCreationTimestamp="2026-02-26 09:44:41 +0000 UTC" firstStartedPulling="2026-02-26 09:44:43.986370819 +0000 UTC m=+129.120316312" lastFinishedPulling="2026-02-26 09:46:46.148722378 +0000 UTC m=+251.282667871" observedRunningTime="2026-02-26 09:46:58.839963649 +0000 UTC m=+263.973909142" 
watchObservedRunningTime="2026-02-26 09:47:24.518268845 +0000 UTC m=+289.652214338" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.519146 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hvl2n" podStartSLOduration=42.13943111 podStartE2EDuration="2m47.519140048s" podCreationTimestamp="2026-02-26 09:44:37 +0000 UTC" firstStartedPulling="2026-02-26 09:44:38.681277783 +0000 UTC m=+123.815223276" lastFinishedPulling="2026-02-26 09:46:44.060986721 +0000 UTC m=+249.194932214" observedRunningTime="2026-02-26 09:46:59.043297251 +0000 UTC m=+264.177242744" watchObservedRunningTime="2026-02-26 09:47:24.519140048 +0000 UTC m=+289.653085531" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.519764 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-895t9" podStartSLOduration=43.251388331 podStartE2EDuration="2m47.519757864s" podCreationTimestamp="2026-02-26 09:44:37 +0000 UTC" firstStartedPulling="2026-02-26 09:44:39.792180247 +0000 UTC m=+124.926125740" lastFinishedPulling="2026-02-26 09:46:44.06054978 +0000 UTC m=+249.194495273" observedRunningTime="2026-02-26 09:46:58.927089641 +0000 UTC m=+264.061035144" watchObservedRunningTime="2026-02-26 09:47:24.519757864 +0000 UTC m=+289.653703357" Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.532030 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g8gj5" podStartSLOduration=41.46266808 podStartE2EDuration="2m47.532009883s" podCreationTimestamp="2026-02-26 09:44:37 +0000 UTC" firstStartedPulling="2026-02-26 09:44:38.672550424 +0000 UTC m=+123.806495917" lastFinishedPulling="2026-02-26 09:46:44.741892227 +0000 UTC m=+249.875837720" observedRunningTime="2026-02-26 09:46:59.015198739 +0000 UTC m=+264.149144232" watchObservedRunningTime="2026-02-26 09:47:24.532009883 +0000 UTC m=+289.665955376" Feb 26 
09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.534482 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588fbc8984-p5prx","openshift-controller-manager/controller-manager-5ff4c4cbd8-snvhv","openshift-authentication/oauth-openshift-558db77b4-2tqr5","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.534563 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr","openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-944999897-55dh8"]
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535073 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535097 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1927678a-3d98-4c82-bff0-b6f12f41d4c0"
Feb 26 09:47:24 crc kubenswrapper[4760]: E0226 09:47:24.535760 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" containerName="oauth-openshift"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535780 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" containerName="oauth-openshift"
Feb 26 09:47:24 crc kubenswrapper[4760]: E0226 09:47:24.535792 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" containerName="installer"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535798 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" containerName="installer"
Feb 26 09:47:24 crc kubenswrapper[4760]: E0226 09:47:24.535808 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" containerName="oc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535815 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" containerName="oc"
Feb 26 09:47:24 crc kubenswrapper[4760]: E0226 09:47:24.535824 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d70a973-5a18-4438-96cc-cc5393128039" containerName="controller-manager"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535830 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d70a973-5a18-4438-96cc-cc5393128039" containerName="controller-manager"
Feb 26 09:47:24 crc kubenswrapper[4760]: E0226 09:47:24.535841 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" containerName="route-controller-manager"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535848 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" containerName="route-controller-manager"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535948 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" containerName="oauth-openshift"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535963 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2217860c-1b72-4728-9f27-d13f66cd5e7b" containerName="installer"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535971 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" containerName="route-controller-manager"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535980 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d70a973-5a18-4438-96cc-cc5393128039" containerName="controller-manager"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.535989 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" containerName="oc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.536421 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.538365 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.538693 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.541089 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"]
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.543529 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.544240 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.544380 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.544493 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.544670 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.544772 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.544925 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.545024 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.545139 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.545270 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.545403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.545659 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.545857 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.546123 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.546271 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.546403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.546524 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.546675 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.546772 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.546898 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.547954 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.550700 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.553277 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"]
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.563845 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.593378 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d70a973-5a18-4438-96cc-cc5393128039" path="/var/lib/kubelet/pods/8d70a973-5a18-4438-96cc-cc5393128039/volumes"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.594194 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2" path="/var/lib/kubelet/pods/e2a77cf4-474a-4ff1-b9dc-b5e20339a4a2/volumes"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.594952 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b4386d-728b-43e0-83e7-030a977d88dd" path="/var/lib/kubelet/pods/e2b4386d-728b-43e0-83e7-030a977d88dd/volumes"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f1a53a-8069-4e84-b214-44a7222f0d86-serving-cert\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv52d\" (UniqueName: \"kubernetes.io/projected/00504ebf-54d7-433d-9540-5200a1dc75fb-kube-api-access-gv52d\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-error\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773615 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00504ebf-54d7-433d-9540-5200a1dc75fb-audit-dir\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8fece2e-a547-4ef8-b1c6-6ada90c28798-serving-cert\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-config\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773774 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773803 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-proxy-ca-bundles\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkx4\" (UniqueName: \"kubernetes.io/projected/16f1a53a-8069-4e84-b214-44a7222f0d86-kube-api-access-rqkx4\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773891 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-login\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.773993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxqg\" (UniqueName: \"kubernetes.io/projected/f8fece2e-a547-4ef8-b1c6-6ada90c28798-kube-api-access-8bxqg\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.774025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-client-ca\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.774133 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.774172 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.774379 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-session\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.775446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.775676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-client-ca\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.775931 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-audit-policies\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.776128 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.776292 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.776452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-config\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.776747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.878876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.879158 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-config\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.879262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.879405 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f1a53a-8069-4e84-b214-44a7222f0d86-serving-cert\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.879508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv52d\" (UniqueName: \"kubernetes.io/projected/00504ebf-54d7-433d-9540-5200a1dc75fb-kube-api-access-gv52d\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.879615 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.880107 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.879620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-error\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.880487 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00504ebf-54d7-433d-9540-5200a1dc75fb-audit-dir\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.880527 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8fece2e-a547-4ef8-b1c6-6ada90c28798-serving-cert\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.880560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-config\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.880608 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.880666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-proxy-ca-bundles\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkx4\" (UniqueName: \"kubernetes.io/projected/16f1a53a-8069-4e84-b214-44a7222f0d86-kube-api-access-rqkx4\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-login\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882809 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bxqg\" (UniqueName: \"kubernetes.io/projected/f8fece2e-a547-4ef8-b1c6-6ada90c28798-kube-api-access-8bxqg\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-client-ca\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882872 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.883373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-session\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.883425 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.883452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-client-ca\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.883517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-audit-policies\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.883561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.883874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-config\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.884278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.881450 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/00504ebf-54d7-433d-9540-5200a1dc75fb-audit-dir\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.882249 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-service-ca\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.887390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-client-ca\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.887560 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-audit-policies\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.888453 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-config\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.894963 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.897524 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.897953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-error\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.897982 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-session\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.900485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.900487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f1a53a-8069-4e84-b214-44a7222f0d86-serving-cert\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.901179 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8fece2e-a547-4ef8-b1c6-6ada90c28798-serving-cert\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.901340 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-client-ca\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.904002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-router-certs\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.913265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.953892 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 26 09:47:24 crc kubenswrapper[4760]: I0226 09:47:24.956049 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.001099 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-user-template-login\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.019173 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.020344 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.022689 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.027496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-proxy-ca-bundles\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.039799 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.045110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkx4\" (UniqueName: \"kubernetes.io/projected/16f1a53a-8069-4e84-b214-44a7222f0d86-kube-api-access-rqkx4\") pod \"controller-manager-7c58c4bf8-mzvnc\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.049059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv52d\" (UniqueName: \"kubernetes.io/projected/00504ebf-54d7-433d-9540-5200a1dc75fb-kube-api-access-gv52d\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.057589 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/00504ebf-54d7-433d-9540-5200a1dc75fb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-944999897-55dh8\" (UID: \"00504ebf-54d7-433d-9540-5200a1dc75fb\") " pod="openshift-authentication/oauth-openshift-944999897-55dh8"
Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.060916 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bxqg\" (UniqueName: 
\"kubernetes.io/projected/f8fece2e-a547-4ef8-b1c6-6ada90c28798-kube-api-access-8bxqg\") pod \"route-controller-manager-6df4c84df5-rhvxr\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.070914 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.169955 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-944999897-55dh8" Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.175851 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.175820131 podStartE2EDuration="27.175820131s" podCreationTimestamp="2026-02-26 09:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:47:25.174675252 +0000 UTC m=+290.308620745" watchObservedRunningTime="2026-02-26 09:47:25.175820131 +0000 UTC m=+290.309765624" Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.193033 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.205789 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.937446 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-944999897-55dh8"] Feb 26 09:47:25 crc kubenswrapper[4760]: I0226 09:47:25.969856 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"] Feb 26 09:47:26 crc kubenswrapper[4760]: I0226 09:47:26.019074 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-944999897-55dh8" event={"ID":"00504ebf-54d7-433d-9540-5200a1dc75fb","Type":"ContainerStarted","Data":"b9da147889bddc4eff6125cfb630d36a7fdb3be59eea3f802f59c8f9acff7870"} Feb 26 09:47:26 crc kubenswrapper[4760]: I0226 09:47:26.021924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" event={"ID":"16f1a53a-8069-4e84-b214-44a7222f0d86","Type":"ContainerStarted","Data":"41e847c4c570fa38bb4681bf052d84abe7e9ac1d14c0f9a643e9390a7d728499"} Feb 26 09:47:26 crc kubenswrapper[4760]: I0226 09:47:26.044068 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"] Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.030411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-944999897-55dh8" event={"ID":"00504ebf-54d7-433d-9540-5200a1dc75fb","Type":"ContainerStarted","Data":"1196851dfcb3c7d8c72d258bc9542a5c735f64387a3135c2abd24db308213d63"} Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.031721 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-944999897-55dh8" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.034449 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" event={"ID":"f8fece2e-a547-4ef8-b1c6-6ada90c28798","Type":"ContainerStarted","Data":"43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301"} Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.034508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" event={"ID":"f8fece2e-a547-4ef8-b1c6-6ada90c28798","Type":"ContainerStarted","Data":"909a5ff9d9774743c76818a8181a843660f7af934b59c3cad9752d6bfd568f12"} Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.035021 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.036890 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" event={"ID":"16f1a53a-8069-4e84-b214-44a7222f0d86","Type":"ContainerStarted","Data":"fc5b78b9123d10fe854bdf9418a6d16e22a0b05fec6a38a8860df61e5ac34b46"} Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.038794 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.124938 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-944999897-55dh8" podStartSLOduration=68.124910906 podStartE2EDuration="1m8.124910906s" podCreationTimestamp="2026-02-26 09:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:47:27.097678406 +0000 UTC m=+292.231623929" watchObservedRunningTime="2026-02-26 09:47:27.124910906 +0000 UTC m=+292.258856399" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.157783 
4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" podStartSLOduration=53.157757102 podStartE2EDuration="53.157757102s" podCreationTimestamp="2026-02-26 09:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:47:27.130040929 +0000 UTC m=+292.263986422" watchObservedRunningTime="2026-02-26 09:47:27.157757102 +0000 UTC m=+292.291702595" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.159892 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.161202 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.183394 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" podStartSLOduration=53.18337296 podStartE2EDuration="53.18337296s" podCreationTimestamp="2026-02-26 09:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:47:27.162004563 +0000 UTC m=+292.295950056" watchObservedRunningTime="2026-02-26 09:47:27.18337296 +0000 UTC m=+292.317318453" Feb 26 09:47:27 crc kubenswrapper[4760]: I0226 09:47:27.187767 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-944999897-55dh8" Feb 26 09:47:31 crc kubenswrapper[4760]: I0226 09:47:31.809298 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 09:47:31 crc kubenswrapper[4760]: I0226 
09:47:31.809898 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6" gracePeriod=5 Feb 26 09:47:34 crc kubenswrapper[4760]: I0226 09:47:34.380933 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"] Feb 26 09:47:34 crc kubenswrapper[4760]: I0226 09:47:34.381977 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" podUID="16f1a53a-8069-4e84-b214-44a7222f0d86" containerName="controller-manager" containerID="cri-o://fc5b78b9123d10fe854bdf9418a6d16e22a0b05fec6a38a8860df61e5ac34b46" gracePeriod=30 Feb 26 09:47:34 crc kubenswrapper[4760]: I0226 09:47:34.472350 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"] Feb 26 09:47:34 crc kubenswrapper[4760]: I0226 09:47:34.472646 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" podUID="f8fece2e-a547-4ef8-b1c6-6ada90c28798" containerName="route-controller-manager" containerID="cri-o://43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301" gracePeriod=30 Feb 26 09:47:35 crc kubenswrapper[4760]: I0226 09:47:35.194758 4760 patch_prober.go:28] interesting pod/route-controller-manager-6df4c84df5-rhvxr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Feb 26 09:47:35 crc kubenswrapper[4760]: I0226 09:47:35.195129 4760 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" podUID="f8fece2e-a547-4ef8-b1c6-6ada90c28798" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Feb 26 09:47:35 crc kubenswrapper[4760]: I0226 09:47:35.215020 4760 patch_prober.go:28] interesting pod/controller-manager-7c58c4bf8-mzvnc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Feb 26 09:47:35 crc kubenswrapper[4760]: I0226 09:47:35.215087 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" podUID="16f1a53a-8069-4e84-b214-44a7222f0d86" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.033070 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.088679 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc"] Feb 26 09:47:36 crc kubenswrapper[4760]: E0226 09:47:36.089001 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.089016 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 09:47:36 crc kubenswrapper[4760]: E0226 09:47:36.089025 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8fece2e-a547-4ef8-b1c6-6ada90c28798" containerName="route-controller-manager" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.089035 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8fece2e-a547-4ef8-b1c6-6ada90c28798" containerName="route-controller-manager" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.089156 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8fece2e-a547-4ef8-b1c6-6ada90c28798" containerName="route-controller-manager" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.089179 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.092479 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.096501 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc"] Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.117163 4760 generic.go:334] "Generic (PLEG): container finished" podID="16f1a53a-8069-4e84-b214-44a7222f0d86" containerID="fc5b78b9123d10fe854bdf9418a6d16e22a0b05fec6a38a8860df61e5ac34b46" exitCode=0 Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.117289 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" event={"ID":"16f1a53a-8069-4e84-b214-44a7222f0d86","Type":"ContainerDied","Data":"fc5b78b9123d10fe854bdf9418a6d16e22a0b05fec6a38a8860df61e5ac34b46"} Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.119670 4760 generic.go:334] "Generic (PLEG): container finished" podID="f8fece2e-a547-4ef8-b1c6-6ada90c28798" containerID="43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301" exitCode=0 Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.119726 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" event={"ID":"f8fece2e-a547-4ef8-b1c6-6ada90c28798","Type":"ContainerDied","Data":"43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301"} Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.119767 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" event={"ID":"f8fece2e-a547-4ef8-b1c6-6ada90c28798","Type":"ContainerDied","Data":"909a5ff9d9774743c76818a8181a843660f7af934b59c3cad9752d6bfd568f12"} Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.119802 4760 scope.go:117] "RemoveContainer" 
containerID="43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.120118 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.178966 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8fece2e-a547-4ef8-b1c6-6ada90c28798-serving-cert\") pod \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.179060 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-client-ca\") pod \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.179091 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bxqg\" (UniqueName: \"kubernetes.io/projected/f8fece2e-a547-4ef8-b1c6-6ada90c28798-kube-api-access-8bxqg\") pod \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.179149 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-config\") pod \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\" (UID: \"f8fece2e-a547-4ef8-b1c6-6ada90c28798\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.182719 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-config" (OuterVolumeSpecName: "config") pod "f8fece2e-a547-4ef8-b1c6-6ada90c28798" (UID: 
"f8fece2e-a547-4ef8-b1c6-6ada90c28798"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.183106 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-client-ca" (OuterVolumeSpecName: "client-ca") pod "f8fece2e-a547-4ef8-b1c6-6ada90c28798" (UID: "f8fece2e-a547-4ef8-b1c6-6ada90c28798"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.191665 4760 scope.go:117] "RemoveContainer" containerID="43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.192797 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8fece2e-a547-4ef8-b1c6-6ada90c28798-kube-api-access-8bxqg" (OuterVolumeSpecName: "kube-api-access-8bxqg") pod "f8fece2e-a547-4ef8-b1c6-6ada90c28798" (UID: "f8fece2e-a547-4ef8-b1c6-6ada90c28798"). InnerVolumeSpecName "kube-api-access-8bxqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.200878 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8fece2e-a547-4ef8-b1c6-6ada90c28798-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f8fece2e-a547-4ef8-b1c6-6ada90c28798" (UID: "f8fece2e-a547-4ef8-b1c6-6ada90c28798"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: E0226 09:47:36.212757 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301\": container with ID starting with 43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301 not found: ID does not exist" containerID="43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.212834 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301"} err="failed to get container status \"43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301\": rpc error: code = NotFound desc = could not find container \"43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301\": container with ID starting with 43465163b57a22d2b4de99dc8ad88292270144138744901a869f09ba5a991301 not found: ID does not exist" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280436 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-client-ca\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280535 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d699cd71-d770-450f-839a-c4fe6a8d7520-serving-cert\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " 
pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280603 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-config\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pnkt\" (UniqueName: \"kubernetes.io/projected/d699cd71-d770-450f-839a-c4fe6a8d7520-kube-api-access-5pnkt\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280741 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280761 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8fece2e-a547-4ef8-b1c6-6ada90c28798-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280778 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8fece2e-a547-4ef8-b1c6-6ada90c28798-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.280794 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bxqg\" (UniqueName: 
\"kubernetes.io/projected/f8fece2e-a547-4ef8-b1c6-6ada90c28798-kube-api-access-8bxqg\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.382534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-client-ca\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.382634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d699cd71-d770-450f-839a-c4fe6a8d7520-serving-cert\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.382686 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-config\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.382723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pnkt\" (UniqueName: \"kubernetes.io/projected/d699cd71-d770-450f-839a-c4fe6a8d7520-kube-api-access-5pnkt\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.384387 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-client-ca\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.395217 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-config\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.395873 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d699cd71-d770-450f-839a-c4fe6a8d7520-serving-cert\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.469097 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pnkt\" (UniqueName: \"kubernetes.io/projected/d699cd71-d770-450f-839a-c4fe6a8d7520-kube-api-access-5pnkt\") pod \"route-controller-manager-586998dbcd-z9skc\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.485848 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.502956 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"] Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.546498 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6df4c84df5-rhvxr"] Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.609272 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-config\") pod \"16f1a53a-8069-4e84-b214-44a7222f0d86\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.609606 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-client-ca\") pod \"16f1a53a-8069-4e84-b214-44a7222f0d86\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.609689 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f1a53a-8069-4e84-b214-44a7222f0d86-serving-cert\") pod \"16f1a53a-8069-4e84-b214-44a7222f0d86\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.609734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqkx4\" (UniqueName: \"kubernetes.io/projected/16f1a53a-8069-4e84-b214-44a7222f0d86-kube-api-access-rqkx4\") pod \"16f1a53a-8069-4e84-b214-44a7222f0d86\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.622419 4760 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-config" (OuterVolumeSpecName: "config") pod "16f1a53a-8069-4e84-b214-44a7222f0d86" (UID: "16f1a53a-8069-4e84-b214-44a7222f0d86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.622627 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-client-ca" (OuterVolumeSpecName: "client-ca") pod "16f1a53a-8069-4e84-b214-44a7222f0d86" (UID: "16f1a53a-8069-4e84-b214-44a7222f0d86"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.638781 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f1a53a-8069-4e84-b214-44a7222f0d86-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16f1a53a-8069-4e84-b214-44a7222f0d86" (UID: "16f1a53a-8069-4e84-b214-44a7222f0d86"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.639751 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-proxy-ca-bundles\") pod \"16f1a53a-8069-4e84-b214-44a7222f0d86\" (UID: \"16f1a53a-8069-4e84-b214-44a7222f0d86\") " Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.640243 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.640261 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.640273 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16f1a53a-8069-4e84-b214-44a7222f0d86-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.640747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16f1a53a-8069-4e84-b214-44a7222f0d86" (UID: "16f1a53a-8069-4e84-b214-44a7222f0d86"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.653655 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8fece2e-a547-4ef8-b1c6-6ada90c28798" path="/var/lib/kubelet/pods/f8fece2e-a547-4ef8-b1c6-6ada90c28798/volumes" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.657674 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f1a53a-8069-4e84-b214-44a7222f0d86-kube-api-access-rqkx4" (OuterVolumeSpecName: "kube-api-access-rqkx4") pod "16f1a53a-8069-4e84-b214-44a7222f0d86" (UID: "16f1a53a-8069-4e84-b214-44a7222f0d86"). InnerVolumeSpecName "kube-api-access-rqkx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.736742 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.742910 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqkx4\" (UniqueName: \"kubernetes.io/projected/16f1a53a-8069-4e84-b214-44a7222f0d86-kube-api-access-rqkx4\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:36 crc kubenswrapper[4760]: I0226 09:47:36.742959 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16f1a53a-8069-4e84-b214-44a7222f0d86-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.028294 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.028757 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047274 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047507 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047554 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047607 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047888 4760 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047908 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047923 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.047993 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.067058 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.136203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" event={"ID":"16f1a53a-8069-4e84-b214-44a7222f0d86","Type":"ContainerDied","Data":"41e847c4c570fa38bb4681bf052d84abe7e9ac1d14c0f9a643e9390a7d728499"} Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.136257 4760 scope.go:117] "RemoveContainer" containerID="fc5b78b9123d10fe854bdf9418a6d16e22a0b05fec6a38a8860df61e5ac34b46" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.136402 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.141626 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.141727 4760 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6" exitCode=137 Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.141807 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.149024 4760 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.149048 4760 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.159103 4760 scope.go:117] "RemoveContainer" containerID="890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.189293 4760 scope.go:117] "RemoveContainer" containerID="890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6" Feb 26 09:47:37 crc kubenswrapper[4760]: E0226 09:47:37.208267 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6\": container with ID starting with 890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6 not found: ID does not exist" containerID="890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.208367 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6"} err="failed to get container status \"890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6\": rpc error: code = NotFound desc = could not find container \"890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6\": container with ID starting with 890867ef08188658861a8fc0f649ff679e2733c6ddc952f2dce618ca2c4af0e6 not 
found: ID does not exist" Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.226049 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"] Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.242761 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c58c4bf8-mzvnc"] Feb 26 09:47:37 crc kubenswrapper[4760]: I0226 09:47:37.581632 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc"] Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.161634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" event={"ID":"d699cd71-d770-450f-839a-c4fe6a8d7520","Type":"ContainerStarted","Data":"c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c"} Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.161748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" event={"ID":"d699cd71-d770-450f-839a-c4fe6a8d7520","Type":"ContainerStarted","Data":"3c1fe2b395a801ef316573975789d25d34b6ff271e60641d2fc422d1d778d93d"} Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.162296 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.176414 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.201974 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" podStartSLOduration=4.201952726 
podStartE2EDuration="4.201952726s" podCreationTimestamp="2026-02-26 09:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:47:38.198178528 +0000 UTC m=+303.332124021" watchObservedRunningTime="2026-02-26 09:47:38.201952726 +0000 UTC m=+303.335898219" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.617373 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f1a53a-8069-4e84-b214-44a7222f0d86" path="/var/lib/kubelet/pods/16f1a53a-8069-4e84-b214-44a7222f0d86/volumes" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.618637 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.961384 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7666cf5956-9hv7t"] Feb 26 09:47:38 crc kubenswrapper[4760]: E0226 09:47:38.962370 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f1a53a-8069-4e84-b214-44a7222f0d86" containerName="controller-manager" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.962408 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f1a53a-8069-4e84-b214-44a7222f0d86" containerName="controller-manager" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.962534 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f1a53a-8069-4e84-b214-44a7222f0d86" containerName="controller-manager" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.963041 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.979640 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.979998 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.980437 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.980470 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.981017 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 09:47:38 crc kubenswrapper[4760]: I0226 09:47:38.983701 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:38.998915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-config\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:38.998974 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-client-ca\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " 
pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:38.999013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-proxy-ca-bundles\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:38.999097 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsj2\" (UniqueName: \"kubernetes.io/projected/06dfc84f-b165-4ee1-8d27-8de9f3960014-kube-api-access-wmsj2\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:38.999141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06dfc84f-b165-4ee1-8d27-8de9f3960014-serving-cert\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.017742 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.024972 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7666cf5956-9hv7t"] Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.100117 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-config\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.100231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-client-ca\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.100273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-proxy-ca-bundles\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.100467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsj2\" (UniqueName: \"kubernetes.io/projected/06dfc84f-b165-4ee1-8d27-8de9f3960014-kube-api-access-wmsj2\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.100496 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06dfc84f-b165-4ee1-8d27-8de9f3960014-serving-cert\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.102651 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-proxy-ca-bundles\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.103350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-client-ca\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.104359 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06dfc84f-b165-4ee1-8d27-8de9f3960014-config\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.111690 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06dfc84f-b165-4ee1-8d27-8de9f3960014-serving-cert\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.127533 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsj2\" (UniqueName: \"kubernetes.io/projected/06dfc84f-b165-4ee1-8d27-8de9f3960014-kube-api-access-wmsj2\") pod \"controller-manager-7666cf5956-9hv7t\" (UID: \"06dfc84f-b165-4ee1-8d27-8de9f3960014\") " pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 
09:47:39 crc kubenswrapper[4760]: I0226 09:47:39.289746 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:40 crc kubenswrapper[4760]: I0226 09:47:40.808868 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7666cf5956-9hv7t"] Feb 26 09:47:41 crc kubenswrapper[4760]: I0226 09:47:41.768024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" event={"ID":"06dfc84f-b165-4ee1-8d27-8de9f3960014","Type":"ContainerStarted","Data":"ce240a79b1f1e7bf827e440ec1f706825f12f1659957f60b5082ed5c3cee2cfa"} Feb 26 09:47:41 crc kubenswrapper[4760]: I0226 09:47:41.768404 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" event={"ID":"06dfc84f-b165-4ee1-8d27-8de9f3960014","Type":"ContainerStarted","Data":"2cf075c445ce7bc12c273620adaa81d7ba5616fa01cde716fbe4fd8b8c53617b"} Feb 26 09:47:41 crc kubenswrapper[4760]: I0226 09:47:41.769414 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:41 crc kubenswrapper[4760]: I0226 09:47:41.793322 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" Feb 26 09:47:41 crc kubenswrapper[4760]: I0226 09:47:41.858454 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7666cf5956-9hv7t" podStartSLOduration=7.858429862 podStartE2EDuration="7.858429862s" podCreationTimestamp="2026-02-26 09:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:47:41.848368019 +0000 UTC m=+306.982313532" 
watchObservedRunningTime="2026-02-26 09:47:41.858429862 +0000 UTC m=+306.992375355" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.288381 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc"] Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.290177 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" podUID="d699cd71-d770-450f-839a-c4fe6a8d7520" containerName="route-controller-manager" containerID="cri-o://c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c" gracePeriod=30 Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.768599 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.891890 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pnkt\" (UniqueName: \"kubernetes.io/projected/d699cd71-d770-450f-839a-c4fe6a8d7520-kube-api-access-5pnkt\") pod \"d699cd71-d770-450f-839a-c4fe6a8d7520\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.892062 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-client-ca\") pod \"d699cd71-d770-450f-839a-c4fe6a8d7520\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.892096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d699cd71-d770-450f-839a-c4fe6a8d7520-serving-cert\") pod \"d699cd71-d770-450f-839a-c4fe6a8d7520\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " Feb 26 09:47:54 
crc kubenswrapper[4760]: I0226 09:47:54.892131 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-config\") pod \"d699cd71-d770-450f-839a-c4fe6a8d7520\" (UID: \"d699cd71-d770-450f-839a-c4fe6a8d7520\") " Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.892849 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-client-ca" (OuterVolumeSpecName: "client-ca") pod "d699cd71-d770-450f-839a-c4fe6a8d7520" (UID: "d699cd71-d770-450f-839a-c4fe6a8d7520"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.892982 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-config" (OuterVolumeSpecName: "config") pod "d699cd71-d770-450f-839a-c4fe6a8d7520" (UID: "d699cd71-d770-450f-839a-c4fe6a8d7520"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.901980 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d699cd71-d770-450f-839a-c4fe6a8d7520-kube-api-access-5pnkt" (OuterVolumeSpecName: "kube-api-access-5pnkt") pod "d699cd71-d770-450f-839a-c4fe6a8d7520" (UID: "d699cd71-d770-450f-839a-c4fe6a8d7520"). InnerVolumeSpecName "kube-api-access-5pnkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.902173 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d699cd71-d770-450f-839a-c4fe6a8d7520-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d699cd71-d770-450f-839a-c4fe6a8d7520" (UID: "d699cd71-d770-450f-839a-c4fe6a8d7520"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.994025 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.994075 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d699cd71-d770-450f-839a-c4fe6a8d7520-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.994094 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d699cd71-d770-450f-839a-c4fe6a8d7520-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:54 crc kubenswrapper[4760]: I0226 09:47:54.994107 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pnkt\" (UniqueName: \"kubernetes.io/projected/d699cd71-d770-450f-839a-c4fe6a8d7520-kube-api-access-5pnkt\") on node \"crc\" DevicePath \"\"" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.245007 4760 generic.go:334] "Generic (PLEG): container finished" podID="d699cd71-d770-450f-839a-c4fe6a8d7520" containerID="c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c" exitCode=0 Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.245058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" event={"ID":"d699cd71-d770-450f-839a-c4fe6a8d7520","Type":"ContainerDied","Data":"c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c"} Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.245101 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" 
event={"ID":"d699cd71-d770-450f-839a-c4fe6a8d7520","Type":"ContainerDied","Data":"3c1fe2b395a801ef316573975789d25d34b6ff271e60641d2fc422d1d778d93d"} Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.245105 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.245128 4760 scope.go:117] "RemoveContainer" containerID="c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.264648 4760 scope.go:117] "RemoveContainer" containerID="c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c" Feb 26 09:47:55 crc kubenswrapper[4760]: E0226 09:47:55.265188 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c\": container with ID starting with c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c not found: ID does not exist" containerID="c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.265250 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c"} err="failed to get container status \"c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c\": rpc error: code = NotFound desc = could not find container \"c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c\": container with ID starting with c638dad6e3603339a6add97d182d92e2bfc179899f749c331d094970a2fa613c not found: ID does not exist" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.280792 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc"] Feb 
26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.288764 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-z9skc"] Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.367850 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn"] Feb 26 09:47:55 crc kubenswrapper[4760]: E0226 09:47:55.368127 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d699cd71-d770-450f-839a-c4fe6a8d7520" containerName="route-controller-manager" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.368143 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d699cd71-d770-450f-839a-c4fe6a8d7520" containerName="route-controller-manager" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.368302 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d699cd71-d770-450f-839a-c4fe6a8d7520" containerName="route-controller-manager" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.368861 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.371969 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.372346 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.372620 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.372662 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.372812 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.372890 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.381548 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn"] Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.501150 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-config\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.501803 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ztv\" (UniqueName: \"kubernetes.io/projected/85ce8af5-916a-4a44-b105-9e52cf212e94-kube-api-access-n6ztv\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.501849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-client-ca\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.502080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ce8af5-916a-4a44-b105-9e52cf212e94-serving-cert\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.603055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-client-ca\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.603463 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ce8af5-916a-4a44-b105-9e52cf212e94-serving-cert\") pod 
\"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.603648 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-config\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.603752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ztv\" (UniqueName: \"kubernetes.io/projected/85ce8af5-916a-4a44-b105-9e52cf212e94-kube-api-access-n6ztv\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.604226 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-client-ca\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.604974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-config\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.608238 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ce8af5-916a-4a44-b105-9e52cf212e94-serving-cert\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.622202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ztv\" (UniqueName: \"kubernetes.io/projected/85ce8af5-916a-4a44-b105-9e52cf212e94-kube-api-access-n6ztv\") pod \"route-controller-manager-6cf858fd97-rcvzn\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:55 crc kubenswrapper[4760]: I0226 09:47:55.685464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:56 crc kubenswrapper[4760]: I0226 09:47:56.076960 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn"] Feb 26 09:47:56 crc kubenswrapper[4760]: I0226 09:47:56.252831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" event={"ID":"85ce8af5-916a-4a44-b105-9e52cf212e94","Type":"ContainerStarted","Data":"0dee8027e5232f6c117bf552d814fef2621055345781df8dd1076079aa8a92d0"} Feb 26 09:47:56 crc kubenswrapper[4760]: I0226 09:47:56.582869 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d699cd71-d770-450f-839a-c4fe6a8d7520" path="/var/lib/kubelet/pods/d699cd71-d770-450f-839a-c4fe6a8d7520/volumes" Feb 26 09:47:57 crc kubenswrapper[4760]: I0226 09:47:57.259977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" event={"ID":"85ce8af5-916a-4a44-b105-9e52cf212e94","Type":"ContainerStarted","Data":"937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb"} Feb 26 09:47:57 crc kubenswrapper[4760]: I0226 09:47:57.260320 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:57 crc kubenswrapper[4760]: I0226 09:47:57.267132 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:47:57 crc kubenswrapper[4760]: I0226 09:47:57.292730 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" podStartSLOduration=3.292701276 podStartE2EDuration="3.292701276s" podCreationTimestamp="2026-02-26 09:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:47:57.284690607 +0000 UTC m=+322.418636150" watchObservedRunningTime="2026-02-26 09:47:57.292701276 +0000 UTC m=+322.426646789" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.144018 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qlzhw"] Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.145344 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qlzhw" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.148310 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.148632 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.153073 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qlzhw"] Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.153313 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jn6zk" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.277463 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp45j\" (UniqueName: \"kubernetes.io/projected/2a8da2b9-3f31-4b08-bccc-57458d7ed615-kube-api-access-mp45j\") pod \"auto-csr-approver-29534988-qlzhw\" (UID: \"2a8da2b9-3f31-4b08-bccc-57458d7ed615\") " pod="openshift-infra/auto-csr-approver-29534988-qlzhw" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.378967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp45j\" (UniqueName: \"kubernetes.io/projected/2a8da2b9-3f31-4b08-bccc-57458d7ed615-kube-api-access-mp45j\") pod \"auto-csr-approver-29534988-qlzhw\" (UID: \"2a8da2b9-3f31-4b08-bccc-57458d7ed615\") " pod="openshift-infra/auto-csr-approver-29534988-qlzhw" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.401536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp45j\" (UniqueName: \"kubernetes.io/projected/2a8da2b9-3f31-4b08-bccc-57458d7ed615-kube-api-access-mp45j\") pod \"auto-csr-approver-29534988-qlzhw\" (UID: \"2a8da2b9-3f31-4b08-bccc-57458d7ed615\") " 
pod="openshift-infra/auto-csr-approver-29534988-qlzhw" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.472990 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qlzhw" Feb 26 09:48:00 crc kubenswrapper[4760]: I0226 09:48:00.903509 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qlzhw"] Feb 26 09:48:01 crc kubenswrapper[4760]: I0226 09:48:01.288968 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534988-qlzhw" event={"ID":"2a8da2b9-3f31-4b08-bccc-57458d7ed615","Type":"ContainerStarted","Data":"9ab540b5f29db7913adee9206aa441d90be17bb3e67c7ca809ea4ed697dc885f"} Feb 26 09:48:02 crc kubenswrapper[4760]: I0226 09:48:02.935184 4760 csr.go:261] certificate signing request csr-jcctm is approved, waiting to be issued Feb 26 09:48:02 crc kubenswrapper[4760]: I0226 09:48:02.959669 4760 csr.go:257] certificate signing request csr-jcctm is issued Feb 26 09:48:03 crc kubenswrapper[4760]: I0226 09:48:03.303953 4760 generic.go:334] "Generic (PLEG): container finished" podID="2a8da2b9-3f31-4b08-bccc-57458d7ed615" containerID="b17ce6670a069f4288d14a90041ce714139e0626b5786e2e07f81208e1f1ce26" exitCode=0 Feb 26 09:48:03 crc kubenswrapper[4760]: I0226 09:48:03.304029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534988-qlzhw" event={"ID":"2a8da2b9-3f31-4b08-bccc-57458d7ed615","Type":"ContainerDied","Data":"b17ce6670a069f4288d14a90041ce714139e0626b5786e2e07f81208e1f1ce26"} Feb 26 09:48:03 crc kubenswrapper[4760]: I0226 09:48:03.961118 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-26 04:09:27.868675799 +0000 UTC Feb 26 09:48:03 crc kubenswrapper[4760]: I0226 09:48:03.961192 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 
7266h21m23.907488937s for next certificate rotation Feb 26 09:48:04 crc kubenswrapper[4760]: I0226 09:48:04.692485 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qlzhw" Feb 26 09:48:04 crc kubenswrapper[4760]: I0226 09:48:04.842871 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp45j\" (UniqueName: \"kubernetes.io/projected/2a8da2b9-3f31-4b08-bccc-57458d7ed615-kube-api-access-mp45j\") pod \"2a8da2b9-3f31-4b08-bccc-57458d7ed615\" (UID: \"2a8da2b9-3f31-4b08-bccc-57458d7ed615\") " Feb 26 09:48:04 crc kubenswrapper[4760]: I0226 09:48:04.849964 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8da2b9-3f31-4b08-bccc-57458d7ed615-kube-api-access-mp45j" (OuterVolumeSpecName: "kube-api-access-mp45j") pod "2a8da2b9-3f31-4b08-bccc-57458d7ed615" (UID: "2a8da2b9-3f31-4b08-bccc-57458d7ed615"). InnerVolumeSpecName "kube-api-access-mp45j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:04 crc kubenswrapper[4760]: I0226 09:48:04.944334 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp45j\" (UniqueName: \"kubernetes.io/projected/2a8da2b9-3f31-4b08-bccc-57458d7ed615-kube-api-access-mp45j\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:04 crc kubenswrapper[4760]: I0226 09:48:04.961663 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-17 00:12:49.080184223 +0000 UTC Feb 26 09:48:04 crc kubenswrapper[4760]: I0226 09:48:04.961698 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7790h24m44.118489359s for next certificate rotation Feb 26 09:48:05 crc kubenswrapper[4760]: I0226 09:48:05.320336 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534988-qlzhw" event={"ID":"2a8da2b9-3f31-4b08-bccc-57458d7ed615","Type":"ContainerDied","Data":"9ab540b5f29db7913adee9206aa441d90be17bb3e67c7ca809ea4ed697dc885f"} Feb 26 09:48:05 crc kubenswrapper[4760]: I0226 09:48:05.320389 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ab540b5f29db7913adee9206aa441d90be17bb3e67c7ca809ea4ed697dc885f" Feb 26 09:48:05 crc kubenswrapper[4760]: I0226 09:48:05.320390 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534988-qlzhw" Feb 26 09:48:14 crc kubenswrapper[4760]: I0226 09:48:14.275824 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn"] Feb 26 09:48:14 crc kubenswrapper[4760]: I0226 09:48:14.277299 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" podUID="85ce8af5-916a-4a44-b105-9e52cf212e94" containerName="route-controller-manager" containerID="cri-o://937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb" gracePeriod=30 Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.241206 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.377235 4760 generic.go:334] "Generic (PLEG): container finished" podID="85ce8af5-916a-4a44-b105-9e52cf212e94" containerID="937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb" exitCode=0 Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.377273 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" event={"ID":"85ce8af5-916a-4a44-b105-9e52cf212e94","Type":"ContainerDied","Data":"937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb"} Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.377295 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.377320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn" event={"ID":"85ce8af5-916a-4a44-b105-9e52cf212e94","Type":"ContainerDied","Data":"0dee8027e5232f6c117bf552d814fef2621055345781df8dd1076079aa8a92d0"} Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.377370 4760 scope.go:117] "RemoveContainer" containerID="937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.383904 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f"] Feb 26 09:48:15 crc kubenswrapper[4760]: E0226 09:48:15.384183 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8da2b9-3f31-4b08-bccc-57458d7ed615" containerName="oc" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.384207 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8da2b9-3f31-4b08-bccc-57458d7ed615" containerName="oc" Feb 26 09:48:15 crc kubenswrapper[4760]: E0226 09:48:15.384219 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ce8af5-916a-4a44-b105-9e52cf212e94" containerName="route-controller-manager" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.384228 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ce8af5-916a-4a44-b105-9e52cf212e94" containerName="route-controller-manager" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.384364 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8da2b9-3f31-4b08-bccc-57458d7ed615" containerName="oc" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.384378 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ce8af5-916a-4a44-b105-9e52cf212e94" 
containerName="route-controller-manager" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.384928 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.389608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-config\") pod \"85ce8af5-916a-4a44-b105-9e52cf212e94\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.389792 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ztv\" (UniqueName: \"kubernetes.io/projected/85ce8af5-916a-4a44-b105-9e52cf212e94-kube-api-access-n6ztv\") pod \"85ce8af5-916a-4a44-b105-9e52cf212e94\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.389854 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ce8af5-916a-4a44-b105-9e52cf212e94-serving-cert\") pod \"85ce8af5-916a-4a44-b105-9e52cf212e94\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.390640 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-config" (OuterVolumeSpecName: "config") pod "85ce8af5-916a-4a44-b105-9e52cf212e94" (UID: "85ce8af5-916a-4a44-b105-9e52cf212e94"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.390950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-client-ca\") pod \"85ce8af5-916a-4a44-b105-9e52cf212e94\" (UID: \"85ce8af5-916a-4a44-b105-9e52cf212e94\") " Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.391690 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-config\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.391900 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-client-ca" (OuterVolumeSpecName: "client-ca") pod "85ce8af5-916a-4a44-b105-9e52cf212e94" (UID: "85ce8af5-916a-4a44-b105-9e52cf212e94"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.394855 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ce8af5-916a-4a44-b105-9e52cf212e94-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85ce8af5-916a-4a44-b105-9e52cf212e94" (UID: "85ce8af5-916a-4a44-b105-9e52cf212e94"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.394952 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ce8af5-916a-4a44-b105-9e52cf212e94-kube-api-access-n6ztv" (OuterVolumeSpecName: "kube-api-access-n6ztv") pod "85ce8af5-916a-4a44-b105-9e52cf212e94" (UID: "85ce8af5-916a-4a44-b105-9e52cf212e94"). InnerVolumeSpecName "kube-api-access-n6ztv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.405291 4760 scope.go:117] "RemoveContainer" containerID="937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb" Feb 26 09:48:15 crc kubenswrapper[4760]: E0226 09:48:15.405683 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb\": container with ID starting with 937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb not found: ID does not exist" containerID="937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.405723 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb"} err="failed to get container status \"937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb\": rpc error: code = NotFound desc = could not find container \"937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb\": container with ID starting with 937fa12d1432b906c19252ff62df835021882c678c841f9296300069a26197eb not found: ID does not exist" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.408208 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f"] Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.492385 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3218b080-c9d5-44ae-ba26-696bbf759f95-client-ca\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 
09:48:15.492449 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6q6q\" (UniqueName: \"kubernetes.io/projected/3218b080-c9d5-44ae-ba26-696bbf759f95-kube-api-access-h6q6q\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.492475 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218b080-c9d5-44ae-ba26-696bbf759f95-config\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.492526 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3218b080-c9d5-44ae-ba26-696bbf759f95-serving-cert\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.492657 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ztv\" (UniqueName: \"kubernetes.io/projected/85ce8af5-916a-4a44-b105-9e52cf212e94-kube-api-access-n6ztv\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.492669 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ce8af5-916a-4a44-b105-9e52cf212e94-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.492694 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/85ce8af5-916a-4a44-b105-9e52cf212e94-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.593737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6q6q\" (UniqueName: \"kubernetes.io/projected/3218b080-c9d5-44ae-ba26-696bbf759f95-kube-api-access-h6q6q\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.594003 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218b080-c9d5-44ae-ba26-696bbf759f95-config\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.594023 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3218b080-c9d5-44ae-ba26-696bbf759f95-serving-cert\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.594084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3218b080-c9d5-44ae-ba26-696bbf759f95-client-ca\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.595410 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/3218b080-c9d5-44ae-ba26-696bbf759f95-client-ca\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.595631 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3218b080-c9d5-44ae-ba26-696bbf759f95-config\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.598637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3218b080-c9d5-44ae-ba26-696bbf759f95-serving-cert\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.612085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6q6q\" (UniqueName: \"kubernetes.io/projected/3218b080-c9d5-44ae-ba26-696bbf759f95-kube-api-access-h6q6q\") pod \"route-controller-manager-586998dbcd-gzn6f\" (UID: \"3218b080-c9d5-44ae-ba26-696bbf759f95\") " pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.705453 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn"] Feb 26 09:48:15 crc kubenswrapper[4760]: I0226 09:48:15.712643 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-rcvzn"] Feb 26 09:48:15 crc 
kubenswrapper[4760]: I0226 09:48:15.725318 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:16 crc kubenswrapper[4760]: I0226 09:48:16.108202 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f"] Feb 26 09:48:16 crc kubenswrapper[4760]: I0226 09:48:16.385078 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" event={"ID":"3218b080-c9d5-44ae-ba26-696bbf759f95","Type":"ContainerStarted","Data":"0df120ed4f54e02db9028f0d16ed15cf1cefcb2d2424a8b9ad929e655699c47e"} Feb 26 09:48:16 crc kubenswrapper[4760]: I0226 09:48:16.585405 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ce8af5-916a-4a44-b105-9e52cf212e94" path="/var/lib/kubelet/pods/85ce8af5-916a-4a44-b105-9e52cf212e94/volumes" Feb 26 09:48:17 crc kubenswrapper[4760]: I0226 09:48:17.399298 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" event={"ID":"3218b080-c9d5-44ae-ba26-696bbf759f95","Type":"ContainerStarted","Data":"faf560737e214b52e3cd7d05b4336f6fbef3c81ec8d099d94e28b6db43572222"} Feb 26 09:48:17 crc kubenswrapper[4760]: I0226 09:48:17.399976 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:17 crc kubenswrapper[4760]: I0226 09:48:17.405951 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" Feb 26 09:48:17 crc kubenswrapper[4760]: I0226 09:48:17.419563 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-586998dbcd-gzn6f" podStartSLOduration=3.419543878 podStartE2EDuration="3.419543878s" podCreationTimestamp="2026-02-26 09:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:48:17.417719313 +0000 UTC m=+342.551664826" watchObservedRunningTime="2026-02-26 09:48:17.419543878 +0000 UTC m=+342.553489381" Feb 26 09:48:44 crc kubenswrapper[4760]: I0226 09:48:44.939326 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vk6cw"] Feb 26 09:48:44 crc kubenswrapper[4760]: I0226 09:48:44.940547 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:44 crc kubenswrapper[4760]: I0226 09:48:44.949513 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vk6cw"] Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.126981 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-trusted-ca\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.127029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.127056 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-bound-sa-token\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.127080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-registry-tls\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.127108 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.127253 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-registry-certificates\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.127395 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: 
\"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.127488 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2crxs\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-kube-api-access-2crxs\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.151870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.228601 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-trusted-ca\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.228909 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.228931 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-bound-sa-token\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.228957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-registry-tls\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.228981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-registry-certificates\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.229007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.229030 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2crxs\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-kube-api-access-2crxs\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.230557 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-trusted-ca\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.230897 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.232738 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-registry-certificates\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.238616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.238815 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-registry-tls\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc 
kubenswrapper[4760]: I0226 09:48:45.247239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2crxs\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-kube-api-access-2crxs\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.256196 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33ea9c7a-7eac-4aab-bb2e-ae31955a84b5-bound-sa-token\") pod \"image-registry-66df7c8f76-vk6cw\" (UID: \"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5\") " pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.258168 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:45 crc kubenswrapper[4760]: I0226 09:48:45.687983 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vk6cw"] Feb 26 09:48:46 crc kubenswrapper[4760]: I0226 09:48:46.557067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" event={"ID":"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5","Type":"ContainerStarted","Data":"33fadc0c67099cf5a8b6a9855f3d0f9588a3a7c6177851b867304a51a7486aff"} Feb 26 09:48:46 crc kubenswrapper[4760]: I0226 09:48:46.557416 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:48:46 crc kubenswrapper[4760]: I0226 09:48:46.557439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" 
event={"ID":"33ea9c7a-7eac-4aab-bb2e-ae31955a84b5","Type":"ContainerStarted","Data":"0eed98ac9d5ad3d63a11fde20d3970e035db1c4388baddf063e17365f6b05c63"} Feb 26 09:48:46 crc kubenswrapper[4760]: I0226 09:48:46.579234 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" podStartSLOduration=2.579205078 podStartE2EDuration="2.579205078s" podCreationTimestamp="2026-02-26 09:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:48:46.573679671 +0000 UTC m=+371.707625164" watchObservedRunningTime="2026-02-26 09:48:46.579205078 +0000 UTC m=+371.713150611" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.447770 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-895t9"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.448846 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-895t9" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="registry-server" containerID="cri-o://acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.459881 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8gj5"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.460441 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g8gj5" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="registry-server" containerID="cri-o://875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.468820 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-hvl2n"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.469167 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hvl2n" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="registry-server" containerID="cri-o://aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.500306 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j58zh"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.505459 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j58zh" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="registry-server" containerID="cri-o://879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.509502 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dlxqc"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.509937 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" podUID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" containerName="marketplace-operator" containerID="cri-o://e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.538559 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wz6v"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.538969 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5wz6v" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="registry-server" 
containerID="cri-o://e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.550666 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x5m47"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.551694 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.574182 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmc2"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.574497 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pzmc2" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="registry-server" containerID="cri-o://264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.582832 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x5m47"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.586434 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmvz4"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.586777 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmvz4" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="registry-server" containerID="cri-o://697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.597251 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzjzl"] Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.597535 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zzjzl" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="registry-server" containerID="cri-o://9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316" gracePeriod=30 Feb 26 09:48:49 crc kubenswrapper[4760]: E0226 09:48:49.652228 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748 is running failed: container process not found" containerID="e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 09:48:49 crc kubenswrapper[4760]: E0226 09:48:49.652529 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748 is running failed: container process not found" containerID="e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 09:48:49 crc kubenswrapper[4760]: E0226 09:48:49.653123 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748 is running failed: container process not found" containerID="e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 09:48:49 crc kubenswrapper[4760]: E0226 09:48:49.653188 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/redhat-marketplace-5wz6v" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="registry-server" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.732967 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c6cbd20-61bb-4ded-a190-d688c267849f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.733077 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2shdk\" (UniqueName: \"kubernetes.io/projected/4c6cbd20-61bb-4ded-a190-d688c267849f-kube-api-access-2shdk\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.733111 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c6cbd20-61bb-4ded-a190-d688c267849f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.834343 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c6cbd20-61bb-4ded-a190-d688c267849f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.834416 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c6cbd20-61bb-4ded-a190-d688c267849f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.834473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2shdk\" (UniqueName: \"kubernetes.io/projected/4c6cbd20-61bb-4ded-a190-d688c267849f-kube-api-access-2shdk\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.844711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c6cbd20-61bb-4ded-a190-d688c267849f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.846089 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4c6cbd20-61bb-4ded-a190-d688c267849f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x5m47\" (UID: \"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:49 crc kubenswrapper[4760]: I0226 09:48:49.852896 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2shdk\" (UniqueName: \"kubernetes.io/projected/4c6cbd20-61bb-4ded-a190-d688c267849f-kube-api-access-2shdk\") pod \"marketplace-operator-79b997595-x5m47\" (UID: 
\"4c6cbd20-61bb-4ded-a190-d688c267849f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.080787 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd is running failed: container process not found" containerID="264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.083459 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd is running failed: container process not found" containerID="264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.088625 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd is running failed: container process not found" containerID="264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.088687 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-pzmc2" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="registry-server" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.164224 4760 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.197994 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8gj5" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.230214 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.231656 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.237961 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.242071 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.278310 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.298339 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.317719 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344061 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpqmx\" (UniqueName: \"kubernetes.io/projected/5b918bed-a785-4a4d-a784-0860bdbadadf-kube-api-access-cpqmx\") pod \"5b918bed-a785-4a4d-a784-0860bdbadadf\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344111 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-catalog-content\") pod \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344150 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzw9k\" (UniqueName: \"kubernetes.io/projected/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-kube-api-access-lzw9k\") pod \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344182 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prsb4\" (UniqueName: \"kubernetes.io/projected/bedbd455-baad-4b56-86b7-1d851407744b-kube-api-access-prsb4\") pod \"bedbd455-baad-4b56-86b7-1d851407744b\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344208 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-utilities\") pod \"7427c503-5c81-488e-b0f0-61b2537a96a4\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344239 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-utilities\") pod \"5b918bed-a785-4a4d-a784-0860bdbadadf\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344281 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxlpk\" (UniqueName: \"kubernetes.io/projected/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-kube-api-access-jxlpk\") pod \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344313 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-utilities\") pod \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\" (UID: \"1e32cadf-ce42-42fd-85de-7cfd1fd43dea\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-catalog-content\") pod \"5b918bed-a785-4a4d-a784-0860bdbadadf\" (UID: \"5b918bed-a785-4a4d-a784-0860bdbadadf\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344378 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-catalog-content\") pod \"bedbd455-baad-4b56-86b7-1d851407744b\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-catalog-content\") pod \"7427c503-5c81-488e-b0f0-61b2537a96a4\" (UID: 
\"7427c503-5c81-488e-b0f0-61b2537a96a4\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-trusted-ca\") pod \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344460 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbbdv\" (UniqueName: \"kubernetes.io/projected/7427c503-5c81-488e-b0f0-61b2537a96a4-kube-api-access-tbbdv\") pod \"7427c503-5c81-488e-b0f0-61b2537a96a4\" (UID: \"7427c503-5c81-488e-b0f0-61b2537a96a4\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-utilities\") pod \"bedbd455-baad-4b56-86b7-1d851407744b\" (UID: \"bedbd455-baad-4b56-86b7-1d851407744b\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.344530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-operator-metrics\") pod \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\" (UID: \"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.346064 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-utilities" (OuterVolumeSpecName: "utilities") pod "1e32cadf-ce42-42fd-85de-7cfd1fd43dea" (UID: "1e32cadf-ce42-42fd-85de-7cfd1fd43dea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.346436 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-utilities" (OuterVolumeSpecName: "utilities") pod "7427c503-5c81-488e-b0f0-61b2537a96a4" (UID: "7427c503-5c81-488e-b0f0-61b2537a96a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.346685 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-utilities" (OuterVolumeSpecName: "utilities") pod "5b918bed-a785-4a4d-a784-0860bdbadadf" (UID: "5b918bed-a785-4a4d-a784-0860bdbadadf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.349628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" (UID: "0726f0c9-0bc5-42b5-bb78-af77ad91ecbb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.348945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-utilities" (OuterVolumeSpecName: "utilities") pod "bedbd455-baad-4b56-86b7-1d851407744b" (UID: "bedbd455-baad-4b56-86b7-1d851407744b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.352882 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7427c503-5c81-488e-b0f0-61b2537a96a4-kube-api-access-tbbdv" (OuterVolumeSpecName: "kube-api-access-tbbdv") pod "7427c503-5c81-488e-b0f0-61b2537a96a4" (UID: "7427c503-5c81-488e-b0f0-61b2537a96a4"). InnerVolumeSpecName "kube-api-access-tbbdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.353106 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-kube-api-access-jxlpk" (OuterVolumeSpecName: "kube-api-access-jxlpk") pod "1e32cadf-ce42-42fd-85de-7cfd1fd43dea" (UID: "1e32cadf-ce42-42fd-85de-7cfd1fd43dea"). InnerVolumeSpecName "kube-api-access-jxlpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.353372 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bedbd455-baad-4b56-86b7-1d851407744b-kube-api-access-prsb4" (OuterVolumeSpecName: "kube-api-access-prsb4") pod "bedbd455-baad-4b56-86b7-1d851407744b" (UID: "bedbd455-baad-4b56-86b7-1d851407744b"). InnerVolumeSpecName "kube-api-access-prsb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.353488 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" (UID: "0726f0c9-0bc5-42b5-bb78-af77ad91ecbb"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.353552 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b918bed-a785-4a4d-a784-0860bdbadadf-kube-api-access-cpqmx" (OuterVolumeSpecName: "kube-api-access-cpqmx") pod "5b918bed-a785-4a4d-a784-0860bdbadadf" (UID: "5b918bed-a785-4a4d-a784-0860bdbadadf"). InnerVolumeSpecName "kube-api-access-cpqmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.353737 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-kube-api-access-lzw9k" (OuterVolumeSpecName: "kube-api-access-lzw9k") pod "0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" (UID: "0726f0c9-0bc5-42b5-bb78-af77ad91ecbb"). InnerVolumeSpecName "kube-api-access-lzw9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.375491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e32cadf-ce42-42fd-85de-7cfd1fd43dea" (UID: "1e32cadf-ce42-42fd-85de-7cfd1fd43dea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.396756 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b918bed-a785-4a4d-a784-0860bdbadadf" (UID: "5b918bed-a785-4a4d-a784-0860bdbadadf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.419726 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7427c503-5c81-488e-b0f0-61b2537a96a4" (UID: "7427c503-5c81-488e-b0f0-61b2537a96a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.419771 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bedbd455-baad-4b56-86b7-1d851407744b" (UID: "bedbd455-baad-4b56-86b7-1d851407744b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445216 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-utilities\") pod \"919bb2ab-9fbf-4a58-835e-8348eebaf093\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445305 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-catalog-content\") pod \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445346 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5ngq\" (UniqueName: \"kubernetes.io/projected/919bb2ab-9fbf-4a58-835e-8348eebaf093-kube-api-access-g5ngq\") pod \"919bb2ab-9fbf-4a58-835e-8348eebaf093\" (UID: 
\"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445434 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc9sx\" (UniqueName: \"kubernetes.io/projected/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-kube-api-access-lc9sx\") pod \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445474 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-utilities\") pod \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445493 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vxwh\" (UniqueName: \"kubernetes.io/projected/3e598e10-dd81-4dce-ad36-a44df83ae7fd-kube-api-access-6vxwh\") pod \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\" (UID: \"3e598e10-dd81-4dce-ad36-a44df83ae7fd\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-catalog-content\") pod \"919bb2ab-9fbf-4a58-835e-8348eebaf093\" (UID: \"919bb2ab-9fbf-4a58-835e-8348eebaf093\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445549 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-catalog-content\") pod \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445614 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-utilities\") pod \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\" (UID: \"6ee6a724-49ab-489e-84b5-cc2f96c89dc2\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445912 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445928 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prsb4\" (UniqueName: \"kubernetes.io/projected/bedbd455-baad-4b56-86b7-1d851407744b-kube-api-access-prsb4\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445942 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzw9k\" (UniqueName: \"kubernetes.io/projected/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-kube-api-access-lzw9k\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445956 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445968 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445979 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxlpk\" (UniqueName: \"kubernetes.io/projected/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-kube-api-access-jxlpk\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.445992 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1e32cadf-ce42-42fd-85de-7cfd1fd43dea-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446003 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b918bed-a785-4a4d-a784-0860bdbadadf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446013 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446024 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7427c503-5c81-488e-b0f0-61b2537a96a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446036 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446049 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbbdv\" (UniqueName: \"kubernetes.io/projected/7427c503-5c81-488e-b0f0-61b2537a96a4-kube-api-access-tbbdv\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446061 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bedbd455-baad-4b56-86b7-1d851407744b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446073 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446086 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpqmx\" (UniqueName: \"kubernetes.io/projected/5b918bed-a785-4a4d-a784-0860bdbadadf-kube-api-access-cpqmx\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-utilities" (OuterVolumeSpecName: "utilities") pod "3e598e10-dd81-4dce-ad36-a44df83ae7fd" (UID: "3e598e10-dd81-4dce-ad36-a44df83ae7fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446399 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-utilities" (OuterVolumeSpecName: "utilities") pod "919bb2ab-9fbf-4a58-835e-8348eebaf093" (UID: "919bb2ab-9fbf-4a58-835e-8348eebaf093"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.446934 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-utilities" (OuterVolumeSpecName: "utilities") pod "6ee6a724-49ab-489e-84b5-cc2f96c89dc2" (UID: "6ee6a724-49ab-489e-84b5-cc2f96c89dc2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.453751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-kube-api-access-lc9sx" (OuterVolumeSpecName: "kube-api-access-lc9sx") pod "6ee6a724-49ab-489e-84b5-cc2f96c89dc2" (UID: "6ee6a724-49ab-489e-84b5-cc2f96c89dc2"). InnerVolumeSpecName "kube-api-access-lc9sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.453910 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919bb2ab-9fbf-4a58-835e-8348eebaf093-kube-api-access-g5ngq" (OuterVolumeSpecName: "kube-api-access-g5ngq") pod "919bb2ab-9fbf-4a58-835e-8348eebaf093" (UID: "919bb2ab-9fbf-4a58-835e-8348eebaf093"). InnerVolumeSpecName "kube-api-access-g5ngq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.456715 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e598e10-dd81-4dce-ad36-a44df83ae7fd-kube-api-access-6vxwh" (OuterVolumeSpecName: "kube-api-access-6vxwh") pod "3e598e10-dd81-4dce-ad36-a44df83ae7fd" (UID: "3e598e10-dd81-4dce-ad36-a44df83ae7fd"). InnerVolumeSpecName "kube-api-access-6vxwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.513533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "919bb2ab-9fbf-4a58-835e-8348eebaf093" (UID: "919bb2ab-9fbf-4a58-835e-8348eebaf093"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.546871 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.546911 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.546922 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5ngq\" (UniqueName: \"kubernetes.io/projected/919bb2ab-9fbf-4a58-835e-8348eebaf093-kube-api-access-g5ngq\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.546933 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc9sx\" (UniqueName: \"kubernetes.io/projected/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-kube-api-access-lc9sx\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.546943 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.546952 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vxwh\" (UniqueName: \"kubernetes.io/projected/3e598e10-dd81-4dce-ad36-a44df83ae7fd-kube-api-access-6vxwh\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.546960 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/919bb2ab-9fbf-4a58-835e-8348eebaf093-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 
09:48:50.573257 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.587857 4760 generic.go:334] "Generic (PLEG): container finished" podID="bedbd455-baad-4b56-86b7-1d851407744b" containerID="875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.587961 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g8gj5" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.599090 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e598e10-dd81-4dce-ad36-a44df83ae7fd" (UID: "3e598e10-dd81-4dce-ad36-a44df83ae7fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.602953 4760 generic.go:334] "Generic (PLEG): container finished" podID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerID="e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.603184 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5wz6v" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.615141 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ee6a724-49ab-489e-84b5-cc2f96c89dc2" (UID: "6ee6a724-49ab-489e-84b5-cc2f96c89dc2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.617999 4760 generic.go:334] "Generic (PLEG): container finished" podID="d5f41609-3893-4649-be8b-2a3c839f082a" containerID="879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.618200 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j58zh" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.628162 4760 generic.go:334] "Generic (PLEG): container finished" podID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerID="aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.628472 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hvl2n" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.632559 4760 generic.go:334] "Generic (PLEG): container finished" podID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" containerID="e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.632923 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.640859 4760 generic.go:334] "Generic (PLEG): container finished" podID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerID="acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.640959 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-895t9" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.646280 4760 generic.go:334] "Generic (PLEG): container finished" podID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerID="697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.646399 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmvz4" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.647948 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee6a724-49ab-489e-84b5-cc2f96c89dc2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.647974 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e598e10-dd81-4dce-ad36-a44df83ae7fd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.649862 4760 generic.go:334] "Generic (PLEG): container finished" podID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerID="9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.650037 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzjzl" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664469 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8gj5" event={"ID":"bedbd455-baad-4b56-86b7-1d851407744b","Type":"ContainerDied","Data":"875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664556 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g8gj5" event={"ID":"bedbd455-baad-4b56-86b7-1d851407744b","Type":"ContainerDied","Data":"3e7e6b4855be5f06d26fcdf37fdad3eed92f2c82ee81e8546c3eef249789fda6"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wz6v" event={"ID":"5b918bed-a785-4a4d-a784-0860bdbadadf","Type":"ContainerDied","Data":"e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5wz6v" event={"ID":"5b918bed-a785-4a4d-a784-0860bdbadadf","Type":"ContainerDied","Data":"911f2d553aeeaaed3500e0724d05f580d36d04f542b7bc767b73d68152d1b053"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664757 4760 generic.go:334] "Generic (PLEG): container finished" podID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerID="264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" exitCode=0 Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j58zh" event={"ID":"d5f41609-3893-4649-be8b-2a3c839f082a","Type":"ContainerDied","Data":"879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664853 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j58zh" event={"ID":"d5f41609-3893-4649-be8b-2a3c839f082a","Type":"ContainerDied","Data":"6908796f6ae8e41cf4f193efa49c7aeb824d1c5d4e37f4b9dddf6374ffbb8aa6"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664900 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvl2n" event={"ID":"7427c503-5c81-488e-b0f0-61b2537a96a4","Type":"ContainerDied","Data":"aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664913 4760 scope.go:117] "RemoveContainer" containerID="875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.664918 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hvl2n" event={"ID":"7427c503-5c81-488e-b0f0-61b2537a96a4","Type":"ContainerDied","Data":"f0e5acfb741b2a7ef0520c3c9c95efb62515d71182903a38a6406846ccf3b781"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" event={"ID":"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb","Type":"ContainerDied","Data":"e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dlxqc" event={"ID":"0726f0c9-0bc5-42b5-bb78-af77ad91ecbb","Type":"ContainerDied","Data":"805a2544d893684a1e6ffdb388e21d3e2f89012aa91b3c70903d6b3f57ce8bfc"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665219 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzmc2" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-895t9" event={"ID":"919bb2ab-9fbf-4a58-835e-8348eebaf093","Type":"ContainerDied","Data":"acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-895t9" event={"ID":"919bb2ab-9fbf-4a58-835e-8348eebaf093","Type":"ContainerDied","Data":"18be98eaafabddba08432b6b77b097ee17d401243004a9df8c0d005060113b2c"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665289 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmvz4" event={"ID":"6ee6a724-49ab-489e-84b5-cc2f96c89dc2","Type":"ContainerDied","Data":"697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665328 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmvz4" event={"ID":"6ee6a724-49ab-489e-84b5-cc2f96c89dc2","Type":"ContainerDied","Data":"fe9343022b5bfeaf4acafbf9c346d04ff74833038c35d5410666f6aced092770"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzjzl" event={"ID":"3e598e10-dd81-4dce-ad36-a44df83ae7fd","Type":"ContainerDied","Data":"9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzjzl" event={"ID":"3e598e10-dd81-4dce-ad36-a44df83ae7fd","Type":"ContainerDied","Data":"82fcf7e9cad6dde7d719fb70cbb22f18f10719c4d989770540f28dc30a32c654"} Feb 26 
09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerDied","Data":"264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.665433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzmc2" event={"ID":"1e32cadf-ce42-42fd-85de-7cfd1fd43dea","Type":"ContainerDied","Data":"e4940b2b67a9e7c14602ac63c403c2c34bf00ad5fc54068ff93746e5df20af71"} Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.709031 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g8gj5"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.722877 4760 scope.go:117] "RemoveContainer" containerID="425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.741398 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g8gj5"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.751843 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-catalog-content\") pod \"d5f41609-3893-4649-be8b-2a3c839f082a\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.751988 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-utilities\") pod \"d5f41609-3893-4649-be8b-2a3c839f082a\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.752066 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zlfdt\" (UniqueName: \"kubernetes.io/projected/d5f41609-3893-4649-be8b-2a3c839f082a-kube-api-access-zlfdt\") pod \"d5f41609-3893-4649-be8b-2a3c839f082a\" (UID: \"d5f41609-3893-4649-be8b-2a3c839f082a\") " Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.753047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-utilities" (OuterVolumeSpecName: "utilities") pod "d5f41609-3893-4649-be8b-2a3c839f082a" (UID: "d5f41609-3893-4649-be8b-2a3c839f082a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.763158 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wz6v"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.768684 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5wz6v"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.774338 4760 scope.go:117] "RemoveContainer" containerID="69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.775670 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hvl2n"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.777235 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f41609-3893-4649-be8b-2a3c839f082a-kube-api-access-zlfdt" (OuterVolumeSpecName: "kube-api-access-zlfdt") pod "d5f41609-3893-4649-be8b-2a3c839f082a" (UID: "d5f41609-3893-4649-be8b-2a3c839f082a"). InnerVolumeSpecName "kube-api-access-zlfdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.782863 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x5m47"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.791774 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hvl2n"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.809544 4760 scope.go:117] "RemoveContainer" containerID="875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.813002 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929\": container with ID starting with 875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929 not found: ID does not exist" containerID="875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.813137 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929"} err="failed to get container status \"875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929\": rpc error: code = NotFound desc = could not find container \"875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929\": container with ID starting with 875ff60f9acc61f8dce800086d928c545dead9d0d9a29cb51f40a4b87cd77929 not found: ID does not exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.813209 4760 scope.go:117] "RemoveContainer" containerID="425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.815385 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa\": container with ID starting with 425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa not found: ID does not exist" containerID="425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.815430 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa"} err="failed to get container status \"425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa\": rpc error: code = NotFound desc = could not find container \"425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa\": container with ID starting with 425f0688ad8952585dd4a6730151512305b97af05184485f417b87b406b590aa not found: ID does not exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.815463 4760 scope.go:117] "RemoveContainer" containerID="69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.815971 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd\": container with ID starting with 69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd not found: ID does not exist" containerID="69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.816047 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd"} err="failed to get container status \"69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd\": rpc error: code = NotFound desc = could not find container \"69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd\": 
container with ID starting with 69622ef6ea2223c5de40f550fa7c533585273a987663aee91b9c2fdee1f4a9dd not found: ID does not exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.816091 4760 scope.go:117] "RemoveContainer" containerID="e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.817288 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzjzl"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.818115 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5f41609-3893-4649-be8b-2a3c839f082a" (UID: "d5f41609-3893-4649-be8b-2a3c839f082a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.823683 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zzjzl"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.827101 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dlxqc"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.835430 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dlxqc"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.846631 4760 scope.go:117] "RemoveContainer" containerID="bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.854608 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.854667 4760 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-zlfdt\" (UniqueName: \"kubernetes.io/projected/d5f41609-3893-4649-be8b-2a3c839f082a-kube-api-access-zlfdt\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.854686 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5f41609-3893-4649-be8b-2a3c839f082a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.870304 4760 scope.go:117] "RemoveContainer" containerID="f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.875580 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmvz4"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.880932 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jmvz4"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.885405 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmc2"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.889096 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzmc2"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.900104 4760 scope.go:117] "RemoveContainer" containerID="e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.900953 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748\": container with ID starting with e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748 not found: ID does not exist" containerID="e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748" Feb 26 09:48:50 crc kubenswrapper[4760]: 
I0226 09:48:50.902642 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-895t9"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.901029 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748"} err="failed to get container status \"e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748\": rpc error: code = NotFound desc = could not find container \"e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748\": container with ID starting with e9a9b0f56d9653740f4a8a4a96af6a48592b959201b390bcfeb97d674a3a5748 not found: ID does not exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.903821 4760 scope.go:117] "RemoveContainer" containerID="bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.904422 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8\": container with ID starting with bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8 not found: ID does not exist" containerID="bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.904465 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8"} err="failed to get container status \"bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8\": rpc error: code = NotFound desc = could not find container \"bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8\": container with ID starting with bda42931828591b1d15975ba85eda14070f7eb94619789f6e2d73301185e80a8 not found: ID does not exist" Feb 26 09:48:50 crc 
kubenswrapper[4760]: I0226 09:48:50.904491 4760 scope.go:117] "RemoveContainer" containerID="f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.905362 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec\": container with ID starting with f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec not found: ID does not exist" containerID="f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.905398 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec"} err="failed to get container status \"f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec\": rpc error: code = NotFound desc = could not find container \"f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec\": container with ID starting with f59191b18d6ebf500c3a306f7beb4c64c891aad2b4e7d80b69eb818617abb7ec not found: ID does not exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.905420 4760 scope.go:117] "RemoveContainer" containerID="879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.911765 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-895t9"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.929090 4760 scope.go:117] "RemoveContainer" containerID="495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.955810 4760 scope.go:117] "RemoveContainer" containerID="c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.960345 4760 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j58zh"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.964327 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j58zh"] Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.969675 4760 scope.go:117] "RemoveContainer" containerID="879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.971191 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90\": container with ID starting with 879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90 not found: ID does not exist" containerID="879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.971234 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90"} err="failed to get container status \"879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90\": rpc error: code = NotFound desc = could not find container \"879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90\": container with ID starting with 879ad8fd6745180635bf88238167f0b63f041cd0ac0b643383d0700b82b41a90 not found: ID does not exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.971267 4760 scope.go:117] "RemoveContainer" containerID="495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.972104 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd\": container with ID starting with 
495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd not found: ID does not exist" containerID="495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.972179 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd"} err="failed to get container status \"495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd\": rpc error: code = NotFound desc = could not find container \"495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd\": container with ID starting with 495c8cc911207d0332df8c043aa14dc03b5b751a78976a691fbee843829a6cfd not found: ID does not exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.972231 4760 scope.go:117] "RemoveContainer" containerID="c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e" Feb 26 09:48:50 crc kubenswrapper[4760]: E0226 09:48:50.972679 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e\": container with ID starting with c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e not found: ID does not exist" containerID="c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.972708 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e"} err="failed to get container status \"c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e\": rpc error: code = NotFound desc = could not find container \"c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e\": container with ID starting with c622b33a99996df6cb9ea69cde0ed9b643076f621be8c871900828d7f74d218e not found: ID does not 
exist" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.972731 4760 scope.go:117] "RemoveContainer" containerID="aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530" Feb 26 09:48:50 crc kubenswrapper[4760]: I0226 09:48:50.989538 4760 scope.go:117] "RemoveContainer" containerID="9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.014519 4760 scope.go:117] "RemoveContainer" containerID="4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.019987 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f41609_3893_4649_be8b_2a3c839f082a.slice/crio-6908796f6ae8e41cf4f193efa49c7aeb824d1c5d4e37f4b9dddf6374ffbb8aa6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f41609_3893_4649_be8b_2a3c839f082a.slice\": RecentStats: unable to find data in memory cache]" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.042766 4760 scope.go:117] "RemoveContainer" containerID="aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.043379 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530\": container with ID starting with aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530 not found: ID does not exist" containerID="aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.043458 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530"} 
err="failed to get container status \"aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530\": rpc error: code = NotFound desc = could not find container \"aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530\": container with ID starting with aee3c8e6f5fa71fe09f40c626762322dcade2b7b13f43b3d7035ef081c7fb530 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.043512 4760 scope.go:117] "RemoveContainer" containerID="9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.043934 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc\": container with ID starting with 9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc not found: ID does not exist" containerID="9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.043989 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc"} err="failed to get container status \"9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc\": rpc error: code = NotFound desc = could not find container \"9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc\": container with ID starting with 9f4f9adb9ce8c755ff5c812a71ec7588da16e7e0c0b5124e85b5ca9c50b7bedc not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.044127 4760 scope.go:117] "RemoveContainer" containerID="4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.044809 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6\": container with ID starting with 4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6 not found: ID does not exist" containerID="4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.044858 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6"} err="failed to get container status \"4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6\": rpc error: code = NotFound desc = could not find container \"4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6\": container with ID starting with 4e142c57e4454fb7d885fb66e1ffe9c7a7b86316b63cd0bfc1b2d8067e58bdb6 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.044881 4760 scope.go:117] "RemoveContainer" containerID="e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.062030 4760 scope.go:117] "RemoveContainer" containerID="e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.062793 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17\": container with ID starting with e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17 not found: ID does not exist" containerID="e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.062827 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17"} err="failed to get container status 
\"e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17\": rpc error: code = NotFound desc = could not find container \"e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17\": container with ID starting with e4c81f3aebfb86a7e1ec7ab276758555adee92a19798e1dc831f7aa19b1d4b17 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.062850 4760 scope.go:117] "RemoveContainer" containerID="acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.088347 4760 scope.go:117] "RemoveContainer" containerID="e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.103931 4760 scope.go:117] "RemoveContainer" containerID="b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.116427 4760 scope.go:117] "RemoveContainer" containerID="acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.116845 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7\": container with ID starting with acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7 not found: ID does not exist" containerID="acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.116887 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7"} err="failed to get container status \"acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7\": rpc error: code = NotFound desc = could not find container \"acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7\": container with ID starting 
with acdd23263c194f3dc8c6283723c90fc0d4b9459bd0ad7924fce5f189d90547d7 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.116916 4760 scope.go:117] "RemoveContainer" containerID="e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.117233 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f\": container with ID starting with e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f not found: ID does not exist" containerID="e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.117291 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f"} err="failed to get container status \"e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f\": rpc error: code = NotFound desc = could not find container \"e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f\": container with ID starting with e50fcd05ea0665817db0eec200847ef47bd58537796dff8af9635bbe1e5fb73f not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.117325 4760 scope.go:117] "RemoveContainer" containerID="b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.117788 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606\": container with ID starting with b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606 not found: ID does not exist" containerID="b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606" Feb 26 09:48:51 
crc kubenswrapper[4760]: I0226 09:48:51.117825 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606"} err="failed to get container status \"b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606\": rpc error: code = NotFound desc = could not find container \"b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606\": container with ID starting with b51f8bfc43b353509f9a0f4a77ea423784355620183f6ac96ed47c21da77a606 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.117845 4760 scope.go:117] "RemoveContainer" containerID="697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.135233 4760 scope.go:117] "RemoveContainer" containerID="38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.151077 4760 scope.go:117] "RemoveContainer" containerID="3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.165023 4760 scope.go:117] "RemoveContainer" containerID="697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.165505 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc\": container with ID starting with 697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc not found: ID does not exist" containerID="697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.165544 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc"} 
err="failed to get container status \"697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc\": rpc error: code = NotFound desc = could not find container \"697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc\": container with ID starting with 697aee0be4ddbe516c9cbce184bf48d955117c379230ffb50c8841dc2612d4dc not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.165589 4760 scope.go:117] "RemoveContainer" containerID="38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.166007 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0\": container with ID starting with 38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0 not found: ID does not exist" containerID="38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.166040 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0"} err="failed to get container status \"38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0\": rpc error: code = NotFound desc = could not find container \"38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0\": container with ID starting with 38c1e30efef3235fb1a3ce151ba35ea14cfd838459687e1854eb5b082d6db2c0 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.166061 4760 scope.go:117] "RemoveContainer" containerID="3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.166508 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c\": container with ID starting with 3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c not found: ID does not exist" containerID="3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.166539 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c"} err="failed to get container status \"3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c\": rpc error: code = NotFound desc = could not find container \"3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c\": container with ID starting with 3ae7f1ac0bc0a7f0e9968b5ef9b4fcb2f804ce7ca5fc50f3a86f751b90d0c13c not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.166559 4760 scope.go:117] "RemoveContainer" containerID="9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.182567 4760 scope.go:117] "RemoveContainer" containerID="f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.200353 4760 scope.go:117] "RemoveContainer" containerID="4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.216173 4760 scope.go:117] "RemoveContainer" containerID="9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.216890 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316\": container with ID starting with 9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316 not found: ID does not exist" 
containerID="9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.216942 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316"} err="failed to get container status \"9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316\": rpc error: code = NotFound desc = could not find container \"9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316\": container with ID starting with 9a50be6219ea226f2f109703705f10e2658a981a41b2011f3137efea67583316 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.217037 4760 scope.go:117] "RemoveContainer" containerID="f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.217679 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312\": container with ID starting with f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312 not found: ID does not exist" containerID="f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.217709 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312"} err="failed to get container status \"f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312\": rpc error: code = NotFound desc = could not find container \"f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312\": container with ID starting with f367bc7c9b2544818752741b517756d6e5ac5d8e28fdde1f51f901dc11977312 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.217724 4760 scope.go:117] 
"RemoveContainer" containerID="4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.218291 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2\": container with ID starting with 4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2 not found: ID does not exist" containerID="4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.218324 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2"} err="failed to get container status \"4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2\": rpc error: code = NotFound desc = could not find container \"4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2\": container with ID starting with 4b3b75379e12fe6238455fc2df2a92954020154d37cd285e07580dbec20398d2 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.218360 4760 scope.go:117] "RemoveContainer" containerID="264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.240069 4760 scope.go:117] "RemoveContainer" containerID="8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.257187 4760 scope.go:117] "RemoveContainer" containerID="f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.276440 4760 scope.go:117] "RemoveContainer" containerID="264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.277002 4760 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd\": container with ID starting with 264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd not found: ID does not exist" containerID="264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.277041 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd"} err="failed to get container status \"264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd\": rpc error: code = NotFound desc = could not find container \"264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd\": container with ID starting with 264287c3e005a83717e66b131242ac9576d3fb565df6ac652b013ac2ea5af2dd not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.277066 4760 scope.go:117] "RemoveContainer" containerID="8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.277448 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61\": container with ID starting with 8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61 not found: ID does not exist" containerID="8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.277473 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61"} err="failed to get container status \"8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61\": rpc error: code = NotFound desc = could not find container 
\"8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61\": container with ID starting with 8edf2df50e7cb27a084b8f74111a736461187405ce794854bbcf96af1064ce61 not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.277492 4760 scope.go:117] "RemoveContainer" containerID="f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b" Feb 26 09:48:51 crc kubenswrapper[4760]: E0226 09:48:51.277879 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b\": container with ID starting with f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b not found: ID does not exist" containerID="f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.277907 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b"} err="failed to get container status \"f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b\": rpc error: code = NotFound desc = could not find container \"f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b\": container with ID starting with f426acd90bce33ef4d893b9a4bbde6d22d2085c51bdfc538fbf524042f76024b not found: ID does not exist" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.679959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" event={"ID":"4c6cbd20-61bb-4ded-a190-d688c267849f","Type":"ContainerStarted","Data":"5b9dba07c6e51add9af4133f56e3d5ab6d49d1b8711513cc978ffab35fbe8aef"} Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.680028 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" 
event={"ID":"4c6cbd20-61bb-4ded-a190-d688c267849f","Type":"ContainerStarted","Data":"d7e4e6eae25f0a38c56417e6b388bd2214a32c5353f2d36a508d1b631c408bc3"} Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.680344 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.684670 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" Feb 26 09:48:51 crc kubenswrapper[4760]: I0226 09:48:51.702658 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x5m47" podStartSLOduration=2.702632359 podStartE2EDuration="2.702632359s" podCreationTimestamp="2026-02-26 09:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 09:48:51.698862796 +0000 UTC m=+376.832808299" watchObservedRunningTime="2026-02-26 09:48:51.702632359 +0000 UTC m=+376.836577852" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.590411 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" path="/var/lib/kubelet/pods/0726f0c9-0bc5-42b5-bb78-af77ad91ecbb/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.591260 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" path="/var/lib/kubelet/pods/1e32cadf-ce42-42fd-85de-7cfd1fd43dea/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.591922 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" path="/var/lib/kubelet/pods/3e598e10-dd81-4dce-ad36-a44df83ae7fd/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.593125 4760 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" path="/var/lib/kubelet/pods/5b918bed-a785-4a4d-a784-0860bdbadadf/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.595298 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" path="/var/lib/kubelet/pods/6ee6a724-49ab-489e-84b5-cc2f96c89dc2/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.595963 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" path="/var/lib/kubelet/pods/7427c503-5c81-488e-b0f0-61b2537a96a4/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.597104 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" path="/var/lib/kubelet/pods/919bb2ab-9fbf-4a58-835e-8348eebaf093/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.597854 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bedbd455-baad-4b56-86b7-1d851407744b" path="/var/lib/kubelet/pods/bedbd455-baad-4b56-86b7-1d851407744b/volumes" Feb 26 09:48:52 crc kubenswrapper[4760]: I0226 09:48:52.599001 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" path="/var/lib/kubelet/pods/d5f41609-3893-4649-be8b-2a3c839f082a/volumes" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.892189 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4td4d"] Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.892946 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.892962 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="extract-content" Feb 26 09:48:55 crc 
kubenswrapper[4760]: E0226 09:48:55.892976 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.892983 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.892998 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893008 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893020 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893027 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893038 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893046 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893057 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893064 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="extract-utilities" Feb 26 09:48:55 crc 
kubenswrapper[4760]: E0226 09:48:55.893079 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893086 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893097 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893105 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893114 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893122 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893131 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893139 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893148 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893156 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="registry-server" Feb 26 09:48:55 crc 
kubenswrapper[4760]: E0226 09:48:55.893165 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893175 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893184 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893191 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893200 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" containerName="marketplace-operator" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893208 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" containerName="marketplace-operator" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893217 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893224 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893234 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893242 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="registry-server" Feb 26 09:48:55 crc 
kubenswrapper[4760]: E0226 09:48:55.893254 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893266 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893275 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893282 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="extract-content" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893292 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893300 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893310 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893317 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893326 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893335 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="extract-utilities" Feb 26 09:48:55 crc 
kubenswrapper[4760]: E0226 09:48:55.893343 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893350 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893358 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893365 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893375 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893383 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="extract-utilities" Feb 26 09:48:55 crc kubenswrapper[4760]: E0226 09:48:55.893393 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893401 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893506 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e598e10-dd81-4dce-ad36-a44df83ae7fd" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893522 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="919bb2ab-9fbf-4a58-835e-8348eebaf093" containerName="registry-server" Feb 26 09:48:55 
crc kubenswrapper[4760]: I0226 09:48:55.893534 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0726f0c9-0bc5-42b5-bb78-af77ad91ecbb" containerName="marketplace-operator" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893562 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b918bed-a785-4a4d-a784-0860bdbadadf" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893589 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f41609-3893-4649-be8b-2a3c839f082a" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893600 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7427c503-5c81-488e-b0f0-61b2537a96a4" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893610 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e32cadf-ce42-42fd-85de-7cfd1fd43dea" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893620 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bedbd455-baad-4b56-86b7-1d851407744b" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.893628 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee6a724-49ab-489e-84b5-cc2f96c89dc2" containerName="registry-server" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.895202 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.898985 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 09:48:55 crc kubenswrapper[4760]: I0226 09:48:55.904050 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4td4d"] Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.038545 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d873fd3-41bb-4134-8b88-2ac414df58cf-catalog-content\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.038632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d873fd3-41bb-4134-8b88-2ac414df58cf-utilities\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.038699 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqds5\" (UniqueName: \"kubernetes.io/projected/2d873fd3-41bb-4134-8b88-2ac414df58cf-kube-api-access-hqds5\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.088199 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g84hz"] Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.089469 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.091860 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.096781 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g84hz"] Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.140364 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d873fd3-41bb-4134-8b88-2ac414df58cf-utilities\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.140430 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqds5\" (UniqueName: \"kubernetes.io/projected/2d873fd3-41bb-4134-8b88-2ac414df58cf-kube-api-access-hqds5\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.140529 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d873fd3-41bb-4134-8b88-2ac414df58cf-catalog-content\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.141219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d873fd3-41bb-4134-8b88-2ac414df58cf-utilities\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " 
pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.141785 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d873fd3-41bb-4134-8b88-2ac414df58cf-catalog-content\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.163408 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqds5\" (UniqueName: \"kubernetes.io/projected/2d873fd3-41bb-4134-8b88-2ac414df58cf-kube-api-access-hqds5\") pod \"certified-operators-4td4d\" (UID: \"2d873fd3-41bb-4134-8b88-2ac414df58cf\") " pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.227112 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.241757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-utilities\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.242039 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cft9f\" (UniqueName: \"kubernetes.io/projected/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-kube-api-access-cft9f\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.242067 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-catalog-content\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.343552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-utilities\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.343652 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cft9f\" (UniqueName: \"kubernetes.io/projected/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-kube-api-access-cft9f\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.343675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-catalog-content\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.344121 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-catalog-content\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.344252 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-utilities\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:56 crc kubenswrapper[4760]: I0226 09:48:56.368626 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cft9f\" (UniqueName: \"kubernetes.io/projected/a8ca4cad-1954-4a07-aaae-3ec25fd7681b-kube-api-access-cft9f\") pod \"redhat-marketplace-g84hz\" (UID: \"a8ca4cad-1954-4a07-aaae-3ec25fd7681b\") " pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:56.411514 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:56.636656 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4td4d"] Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:56.721015 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4td4d" event={"ID":"2d873fd3-41bb-4134-8b88-2ac414df58cf","Type":"ContainerStarted","Data":"cb0bbfe133893dae9b25f690235de767decf9f31e55fdf4b316b5b865e2cef29"} Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:57.729717 4760 generic.go:334] "Generic (PLEG): container finished" podID="2d873fd3-41bb-4134-8b88-2ac414df58cf" containerID="a74b91768853834566eb8bb57f765d4d9f89351f92d42118f651e6f5a6d9e83d" exitCode=0 Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:57.730027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4td4d" event={"ID":"2d873fd3-41bb-4134-8b88-2ac414df58cf","Type":"ContainerDied","Data":"a74b91768853834566eb8bb57f765d4d9f89351f92d42118f651e6f5a6d9e83d"} Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 
09:48:58.287214 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5tjf"] Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.288485 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.291156 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.300417 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5tjf"] Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.480298 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f90c2ca-6715-40ab-838c-f4042cca4a49-utilities\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.480345 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f90c2ca-6715-40ab-838c-f4042cca4a49-catalog-content\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.480379 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgp2\" (UniqueName: \"kubernetes.io/projected/6f90c2ca-6715-40ab-838c-f4042cca4a49-kube-api-access-zmgp2\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.492933 4760 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l5zzk"] Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.494674 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.499148 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.501434 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5zzk"] Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.582038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f90c2ca-6715-40ab-838c-f4042cca4a49-utilities\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.582429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f90c2ca-6715-40ab-838c-f4042cca4a49-catalog-content\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.582500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmgp2\" (UniqueName: \"kubernetes.io/projected/6f90c2ca-6715-40ab-838c-f4042cca4a49-kube-api-access-zmgp2\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.582943 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/6f90c2ca-6715-40ab-838c-f4042cca4a49-catalog-content\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.583032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f90c2ca-6715-40ab-838c-f4042cca4a49-utilities\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.602118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmgp2\" (UniqueName: \"kubernetes.io/projected/6f90c2ca-6715-40ab-838c-f4042cca4a49-kube-api-access-zmgp2\") pod \"community-operators-h5tjf\" (UID: \"6f90c2ca-6715-40ab-838c-f4042cca4a49\") " pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.624803 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.683266 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ztr2\" (UniqueName: \"kubernetes.io/projected/ccc24792-d90d-4d64-b056-945beaadc57f-kube-api-access-2ztr2\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.683306 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc24792-d90d-4d64-b056-945beaadc57f-catalog-content\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.683352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc24792-d90d-4d64-b056-945beaadc57f-utilities\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.784729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc24792-d90d-4d64-b056-945beaadc57f-utilities\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.784806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ztr2\" (UniqueName: \"kubernetes.io/projected/ccc24792-d90d-4d64-b056-945beaadc57f-kube-api-access-2ztr2\") pod \"redhat-operators-l5zzk\" (UID: 
\"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.784828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc24792-d90d-4d64-b056-945beaadc57f-catalog-content\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.785222 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc24792-d90d-4d64-b056-945beaadc57f-utilities\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.785258 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc24792-d90d-4d64-b056-945beaadc57f-catalog-content\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.804139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ztr2\" (UniqueName: \"kubernetes.io/projected/ccc24792-d90d-4d64-b056-945beaadc57f-kube-api-access-2ztr2\") pod \"redhat-operators-l5zzk\" (UID: \"ccc24792-d90d-4d64-b056-945beaadc57f\") " pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.816014 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:48:58 crc kubenswrapper[4760]: I0226 09:48:58.845824 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g84hz"] Feb 26 09:48:58 crc kubenswrapper[4760]: W0226 09:48:58.850263 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ca4cad_1954_4a07_aaae_3ec25fd7681b.slice/crio-38e7c5782ac4f1250a4201bf7bac9af194d7ec252e013200b2912a627676ce01 WatchSource:0}: Error finding container 38e7c5782ac4f1250a4201bf7bac9af194d7ec252e013200b2912a627676ce01: Status 404 returned error can't find the container with id 38e7c5782ac4f1250a4201bf7bac9af194d7ec252e013200b2912a627676ce01 Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.181949 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l5zzk"] Feb 26 09:48:59 crc kubenswrapper[4760]: W0226 09:48:59.194543 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccc24792_d90d_4d64_b056_945beaadc57f.slice/crio-2f9a5432e8397d660218aed2f4e5b2e3d54d40aa3145865f473d9288c8225d2a WatchSource:0}: Error finding container 2f9a5432e8397d660218aed2f4e5b2e3d54d40aa3145865f473d9288c8225d2a: Status 404 returned error can't find the container with id 2f9a5432e8397d660218aed2f4e5b2e3d54d40aa3145865f473d9288c8225d2a Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.313786 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5tjf"] Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.743532 4760 generic.go:334] "Generic (PLEG): container finished" podID="6f90c2ca-6715-40ab-838c-f4042cca4a49" containerID="e2b9c6312d366e4132682888b14e37b6e62b9af732a6e01afa61a4a761a4fb27" exitCode=0 Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.743649 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5tjf" event={"ID":"6f90c2ca-6715-40ab-838c-f4042cca4a49","Type":"ContainerDied","Data":"e2b9c6312d366e4132682888b14e37b6e62b9af732a6e01afa61a4a761a4fb27"} Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.744190 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5tjf" event={"ID":"6f90c2ca-6715-40ab-838c-f4042cca4a49","Type":"ContainerStarted","Data":"78d409f5dd95349221a4f628a1a62dfe12c0cca6d324bbf3731a3dd65434880c"} Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.750360 4760 generic.go:334] "Generic (PLEG): container finished" podID="2d873fd3-41bb-4134-8b88-2ac414df58cf" containerID="685f658d8a7c236b347e8f4d3b6ea4ed1095f3af654293db92430916fe555af7" exitCode=0 Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.750466 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4td4d" event={"ID":"2d873fd3-41bb-4134-8b88-2ac414df58cf","Type":"ContainerDied","Data":"685f658d8a7c236b347e8f4d3b6ea4ed1095f3af654293db92430916fe555af7"} Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.753411 4760 generic.go:334] "Generic (PLEG): container finished" podID="ccc24792-d90d-4d64-b056-945beaadc57f" containerID="cf8d808a7e431f5382805314b295a648433ca82e5f90093fa5052ce2d3e1f429" exitCode=0 Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.753469 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5zzk" event={"ID":"ccc24792-d90d-4d64-b056-945beaadc57f","Type":"ContainerDied","Data":"cf8d808a7e431f5382805314b295a648433ca82e5f90093fa5052ce2d3e1f429"} Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.753493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5zzk" 
event={"ID":"ccc24792-d90d-4d64-b056-945beaadc57f","Type":"ContainerStarted","Data":"2f9a5432e8397d660218aed2f4e5b2e3d54d40aa3145865f473d9288c8225d2a"} Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.756730 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8ca4cad-1954-4a07-aaae-3ec25fd7681b" containerID="085adc146158f3421756f34b97ab79a88c516c721b37e4a1b8078f899edf1a9a" exitCode=0 Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.756767 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g84hz" event={"ID":"a8ca4cad-1954-4a07-aaae-3ec25fd7681b","Type":"ContainerDied","Data":"085adc146158f3421756f34b97ab79a88c516c721b37e4a1b8078f899edf1a9a"} Feb 26 09:48:59 crc kubenswrapper[4760]: I0226 09:48:59.756793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g84hz" event={"ID":"a8ca4cad-1954-4a07-aaae-3ec25fd7681b","Type":"ContainerStarted","Data":"38e7c5782ac4f1250a4201bf7bac9af194d7ec252e013200b2912a627676ce01"} Feb 26 09:49:00 crc kubenswrapper[4760]: I0226 09:49:00.764774 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4td4d" event={"ID":"2d873fd3-41bb-4134-8b88-2ac414df58cf","Type":"ContainerStarted","Data":"cdfaa6f0bc8e9aa8a81fcbfc2f68a6e43a5ec8c82ade60ac8a9bdd92f09e9213"} Feb 26 09:49:01 crc kubenswrapper[4760]: I0226 09:49:01.773015 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5tjf" event={"ID":"6f90c2ca-6715-40ab-838c-f4042cca4a49","Type":"ContainerStarted","Data":"3d79768da38c4c2fbb41e5831de6c3bd08174beafae683dec3fec20f6de995da"} Feb 26 09:49:01 crc kubenswrapper[4760]: I0226 09:49:01.775931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g84hz" 
event={"ID":"a8ca4cad-1954-4a07-aaae-3ec25fd7681b","Type":"ContainerStarted","Data":"e1b0f69a182cc1ddebac094f40faf399e2df6cb2add7aac55cc9e314418c215f"} Feb 26 09:49:01 crc kubenswrapper[4760]: I0226 09:49:01.778670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5zzk" event={"ID":"ccc24792-d90d-4d64-b056-945beaadc57f","Type":"ContainerStarted","Data":"b263cf928520b79e5041b93183def314d3b1fd28fd689e93d8b58b9c889e28a3"} Feb 26 09:49:01 crc kubenswrapper[4760]: I0226 09:49:01.839852 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4td4d" podStartSLOduration=4.050097962 podStartE2EDuration="6.839831539s" podCreationTimestamp="2026-02-26 09:48:55 +0000 UTC" firstStartedPulling="2026-02-26 09:48:57.733700594 +0000 UTC m=+382.867646087" lastFinishedPulling="2026-02-26 09:49:00.523434171 +0000 UTC m=+385.657379664" observedRunningTime="2026-02-26 09:49:01.838918407 +0000 UTC m=+386.972863900" watchObservedRunningTime="2026-02-26 09:49:01.839831539 +0000 UTC m=+386.973777032" Feb 26 09:49:02 crc kubenswrapper[4760]: I0226 09:49:02.787733 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8ca4cad-1954-4a07-aaae-3ec25fd7681b" containerID="e1b0f69a182cc1ddebac094f40faf399e2df6cb2add7aac55cc9e314418c215f" exitCode=0 Feb 26 09:49:02 crc kubenswrapper[4760]: I0226 09:49:02.787830 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g84hz" event={"ID":"a8ca4cad-1954-4a07-aaae-3ec25fd7681b","Type":"ContainerDied","Data":"e1b0f69a182cc1ddebac094f40faf399e2df6cb2add7aac55cc9e314418c215f"} Feb 26 09:49:02 crc kubenswrapper[4760]: I0226 09:49:02.789961 4760 generic.go:334] "Generic (PLEG): container finished" podID="ccc24792-d90d-4d64-b056-945beaadc57f" containerID="b263cf928520b79e5041b93183def314d3b1fd28fd689e93d8b58b9c889e28a3" exitCode=0 Feb 26 09:49:02 crc kubenswrapper[4760]: I0226 09:49:02.789995 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5zzk" event={"ID":"ccc24792-d90d-4d64-b056-945beaadc57f","Type":"ContainerDied","Data":"b263cf928520b79e5041b93183def314d3b1fd28fd689e93d8b58b9c889e28a3"} Feb 26 09:49:02 crc kubenswrapper[4760]: I0226 09:49:02.791845 4760 generic.go:334] "Generic (PLEG): container finished" podID="6f90c2ca-6715-40ab-838c-f4042cca4a49" containerID="3d79768da38c4c2fbb41e5831de6c3bd08174beafae683dec3fec20f6de995da" exitCode=0 Feb 26 09:49:02 crc kubenswrapper[4760]: I0226 09:49:02.791870 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5tjf" event={"ID":"6f90c2ca-6715-40ab-838c-f4042cca4a49","Type":"ContainerDied","Data":"3d79768da38c4c2fbb41e5831de6c3bd08174beafae683dec3fec20f6de995da"} Feb 26 09:49:04 crc kubenswrapper[4760]: I0226 09:49:04.804186 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5tjf" event={"ID":"6f90c2ca-6715-40ab-838c-f4042cca4a49","Type":"ContainerStarted","Data":"57c524b02ac089edff930b94542fbab567cee9266234eed57c450e68ba34c985"} Feb 26 09:49:04 crc kubenswrapper[4760]: I0226 09:49:04.806437 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l5zzk" event={"ID":"ccc24792-d90d-4d64-b056-945beaadc57f","Type":"ContainerStarted","Data":"af79a7eb355b7d1eaf0c5cd5b78acb061f5631e5190a6fdb3607ca7cc86efd49"} Feb 26 09:49:04 crc kubenswrapper[4760]: I0226 09:49:04.831958 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5tjf" podStartSLOduration=2.921622989 podStartE2EDuration="6.831941568s" podCreationTimestamp="2026-02-26 09:48:58 +0000 UTC" firstStartedPulling="2026-02-26 09:48:59.765365374 +0000 UTC m=+384.899310867" lastFinishedPulling="2026-02-26 09:49:03.675683953 +0000 UTC m=+388.809629446" observedRunningTime="2026-02-26 09:49:04.828701022 
+0000 UTC m=+389.962646515" watchObservedRunningTime="2026-02-26 09:49:04.831941568 +0000 UTC m=+389.965887061" Feb 26 09:49:05 crc kubenswrapper[4760]: I0226 09:49:05.263620 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vk6cw" Feb 26 09:49:05 crc kubenswrapper[4760]: I0226 09:49:05.288059 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l5zzk" podStartSLOduration=3.58428609 podStartE2EDuration="7.288038676s" podCreationTimestamp="2026-02-26 09:48:58 +0000 UTC" firstStartedPulling="2026-02-26 09:48:59.754858186 +0000 UTC m=+384.888803689" lastFinishedPulling="2026-02-26 09:49:03.458610772 +0000 UTC m=+388.592556275" observedRunningTime="2026-02-26 09:49:04.858721172 +0000 UTC m=+389.992666665" watchObservedRunningTime="2026-02-26 09:49:05.288038676 +0000 UTC m=+390.421984159" Feb 26 09:49:05 crc kubenswrapper[4760]: I0226 09:49:05.331762 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fjgn"] Feb 26 09:49:05 crc kubenswrapper[4760]: I0226 09:49:05.814079 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g84hz" event={"ID":"a8ca4cad-1954-4a07-aaae-3ec25fd7681b","Type":"ContainerStarted","Data":"971c7834d89d5cb5aeef040bf8678149a3f50baa9efbde1e9c7c117d3d172ddc"} Feb 26 09:49:05 crc kubenswrapper[4760]: I0226 09:49:05.834704 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g84hz" podStartSLOduration=5.489023463 podStartE2EDuration="9.834686243s" podCreationTimestamp="2026-02-26 09:48:56 +0000 UTC" firstStartedPulling="2026-02-26 09:48:59.75787907 +0000 UTC m=+384.891824563" lastFinishedPulling="2026-02-26 09:49:04.10354185 +0000 UTC m=+389.237487343" observedRunningTime="2026-02-26 09:49:05.833160229 +0000 UTC m=+390.967105722" 
watchObservedRunningTime="2026-02-26 09:49:05.834686243 +0000 UTC m=+390.968631746" Feb 26 09:49:06 crc kubenswrapper[4760]: I0226 09:49:06.227788 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:49:06 crc kubenswrapper[4760]: I0226 09:49:06.227851 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:49:06 crc kubenswrapper[4760]: I0226 09:49:06.277607 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:49:06 crc kubenswrapper[4760]: I0226 09:49:06.412606 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:49:06 crc kubenswrapper[4760]: I0226 09:49:06.413027 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:49:06 crc kubenswrapper[4760]: I0226 09:49:06.868858 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4td4d" Feb 26 09:49:07 crc kubenswrapper[4760]: I0226 09:49:07.463024 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g84hz" podUID="a8ca4cad-1954-4a07-aaae-3ec25fd7681b" containerName="registry-server" probeResult="failure" output=< Feb 26 09:49:07 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Feb 26 09:49:07 crc kubenswrapper[4760]: > Feb 26 09:49:08 crc kubenswrapper[4760]: I0226 09:49:08.625064 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:49:08 crc kubenswrapper[4760]: I0226 09:49:08.625147 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:49:08 crc kubenswrapper[4760]: I0226 09:49:08.667044 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:49:08 crc kubenswrapper[4760]: I0226 09:49:08.816994 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:49:08 crc kubenswrapper[4760]: I0226 09:49:08.817055 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:49:08 crc kubenswrapper[4760]: I0226 09:49:08.876344 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5tjf" Feb 26 09:49:09 crc kubenswrapper[4760]: I0226 09:49:09.854274 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l5zzk" podUID="ccc24792-d90d-4d64-b056-945beaadc57f" containerName="registry-server" probeResult="failure" output=< Feb 26 09:49:09 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Feb 26 09:49:09 crc kubenswrapper[4760]: > Feb 26 09:49:16 crc kubenswrapper[4760]: I0226 09:49:16.450537 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:49:16 crc kubenswrapper[4760]: I0226 09:49:16.488005 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g84hz" Feb 26 09:49:16 crc kubenswrapper[4760]: I0226 09:49:16.640265 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:49:16 crc kubenswrapper[4760]: 
I0226 09:49:16.640312 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:49:18 crc kubenswrapper[4760]: I0226 09:49:18.856739 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:49:18 crc kubenswrapper[4760]: I0226 09:49:18.903320 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l5zzk" Feb 26 09:49:20 crc kubenswrapper[4760]: I0226 09:49:20.784931 4760 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod6ee6a724-49ab-489e-84b5-cc2f96c89dc2"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod6ee6a724-49ab-489e-84b5-cc2f96c89dc2] : Timed out while waiting for systemd to remove kubepods-burstable-pod6ee6a724_49ab_489e_84b5_cc2f96c89dc2.slice" Feb 26 09:49:20 crc kubenswrapper[4760]: I0226 09:49:20.812313 4760 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod919bb2ab-9fbf-4a58-835e-8348eebaf093"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod919bb2ab-9fbf-4a58-835e-8348eebaf093] : Timed out while waiting for systemd to remove kubepods-burstable-pod919bb2ab_9fbf_4a58_835e_8348eebaf093.slice" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.380247 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" podUID="75bd609c-9135-4d9a-b974-a1b026ac6598" containerName="registry" containerID="cri-o://6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17" gracePeriod=30 Feb 26 09:49:30 crc 
kubenswrapper[4760]: I0226 09:49:30.741450 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.860229 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt288\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-kube-api-access-wt288\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.860320 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-trusted-ca\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.860347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-certificates\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.860370 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75bd609c-9135-4d9a-b974-a1b026ac6598-ca-trust-extracted\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.860469 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-bound-sa-token\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 
09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.860518 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75bd609c-9135-4d9a-b974-a1b026ac6598-installation-pull-secrets\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.860536 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-tls\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.861391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"75bd609c-9135-4d9a-b974-a1b026ac6598\" (UID: \"75bd609c-9135-4d9a-b974-a1b026ac6598\") " Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.861234 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.861510 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.861863 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.861889 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.865539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.866042 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.866285 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75bd609c-9135-4d9a-b974-a1b026ac6598-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.868629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-kube-api-access-wt288" (OuterVolumeSpecName: "kube-api-access-wt288") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "kube-api-access-wt288". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.877423 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.879437 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75bd609c-9135-4d9a-b974-a1b026ac6598-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "75bd609c-9135-4d9a-b974-a1b026ac6598" (UID: "75bd609c-9135-4d9a-b974-a1b026ac6598"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.958589 4760 generic.go:334] "Generic (PLEG): container finished" podID="75bd609c-9135-4d9a-b974-a1b026ac6598" containerID="6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17" exitCode=0 Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.958637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" event={"ID":"75bd609c-9135-4d9a-b974-a1b026ac6598","Type":"ContainerDied","Data":"6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17"} Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.958664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" event={"ID":"75bd609c-9135-4d9a-b974-a1b026ac6598","Type":"ContainerDied","Data":"d0a29cae3cc0ead3a0737a4a639c20f0191b60b9b8c322ec94c0fe94b846426a"} Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.958679 4760 scope.go:117] "RemoveContainer" containerID="6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.958680 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9fjgn" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.962398 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.962422 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75bd609c-9135-4d9a-b974-a1b026ac6598-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.962436 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.962446 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt288\" (UniqueName: \"kubernetes.io/projected/75bd609c-9135-4d9a-b974-a1b026ac6598-kube-api-access-wt288\") on node \"crc\" DevicePath \"\"" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.962455 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75bd609c-9135-4d9a-b974-a1b026ac6598-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.976164 4760 scope.go:117] "RemoveContainer" containerID="6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17" Feb 26 09:49:30 crc kubenswrapper[4760]: E0226 09:49:30.976625 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17\": container with ID starting with 6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17 
not found: ID does not exist" containerID="6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.976669 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17"} err="failed to get container status \"6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17\": rpc error: code = NotFound desc = could not find container \"6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17\": container with ID starting with 6fc7d3df41899b67f2173969149f8f9e28db11efd9039363f6d0b44901a78c17 not found: ID does not exist" Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.992525 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fjgn"] Feb 26 09:49:30 crc kubenswrapper[4760]: I0226 09:49:30.997071 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9fjgn"] Feb 26 09:49:32 crc kubenswrapper[4760]: I0226 09:49:32.587825 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75bd609c-9135-4d9a-b974-a1b026ac6598" path="/var/lib/kubelet/pods/75bd609c-9135-4d9a-b974-a1b026ac6598/volumes" Feb 26 09:49:46 crc kubenswrapper[4760]: I0226 09:49:46.639771 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:49:46 crc kubenswrapper[4760]: I0226 09:49:46.640330 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.142612 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534990-xzm92"] Feb 26 09:50:00 crc kubenswrapper[4760]: E0226 09:50:00.143608 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75bd609c-9135-4d9a-b974-a1b026ac6598" containerName="registry" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.143632 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="75bd609c-9135-4d9a-b974-a1b026ac6598" containerName="registry" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.143800 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="75bd609c-9135-4d9a-b974-a1b026ac6598" containerName="registry" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.144320 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-xzm92" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.147853 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.148486 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.148906 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jn6zk" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.155415 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534990-xzm92"] Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.276053 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8q9p\" (UniqueName: 
\"kubernetes.io/projected/9267ef8c-141e-43d3-bc47-f1cc2d4adf0e-kube-api-access-p8q9p\") pod \"auto-csr-approver-29534990-xzm92\" (UID: \"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e\") " pod="openshift-infra/auto-csr-approver-29534990-xzm92" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.378232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8q9p\" (UniqueName: \"kubernetes.io/projected/9267ef8c-141e-43d3-bc47-f1cc2d4adf0e-kube-api-access-p8q9p\") pod \"auto-csr-approver-29534990-xzm92\" (UID: \"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e\") " pod="openshift-infra/auto-csr-approver-29534990-xzm92" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.401057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8q9p\" (UniqueName: \"kubernetes.io/projected/9267ef8c-141e-43d3-bc47-f1cc2d4adf0e-kube-api-access-p8q9p\") pod \"auto-csr-approver-29534990-xzm92\" (UID: \"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e\") " pod="openshift-infra/auto-csr-approver-29534990-xzm92" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.470426 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-xzm92" Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.872718 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534990-xzm92"] Feb 26 09:50:00 crc kubenswrapper[4760]: I0226 09:50:00.885979 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 09:50:01 crc kubenswrapper[4760]: I0226 09:50:01.150411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534990-xzm92" event={"ID":"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e","Type":"ContainerStarted","Data":"9961f0d5ed2f57e8d2ccbb71e75aad672d5cdc2301dc74ef7580abdca9649b96"} Feb 26 09:50:03 crc kubenswrapper[4760]: I0226 09:50:03.163869 4760 generic.go:334] "Generic (PLEG): container finished" podID="9267ef8c-141e-43d3-bc47-f1cc2d4adf0e" containerID="b4ba00a9a6c1dd11f4cb02b40a3000bb250141974a9d8b6c8077a43c5f96d3bc" exitCode=0 Feb 26 09:50:03 crc kubenswrapper[4760]: I0226 09:50:03.163973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534990-xzm92" event={"ID":"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e","Type":"ContainerDied","Data":"b4ba00a9a6c1dd11f4cb02b40a3000bb250141974a9d8b6c8077a43c5f96d3bc"} Feb 26 09:50:04 crc kubenswrapper[4760]: I0226 09:50:04.422414 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-xzm92" Feb 26 09:50:04 crc kubenswrapper[4760]: I0226 09:50:04.470416 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8q9p\" (UniqueName: \"kubernetes.io/projected/9267ef8c-141e-43d3-bc47-f1cc2d4adf0e-kube-api-access-p8q9p\") pod \"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e\" (UID: \"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e\") " Feb 26 09:50:04 crc kubenswrapper[4760]: I0226 09:50:04.486503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9267ef8c-141e-43d3-bc47-f1cc2d4adf0e-kube-api-access-p8q9p" (OuterVolumeSpecName: "kube-api-access-p8q9p") pod "9267ef8c-141e-43d3-bc47-f1cc2d4adf0e" (UID: "9267ef8c-141e-43d3-bc47-f1cc2d4adf0e"). InnerVolumeSpecName "kube-api-access-p8q9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:50:04 crc kubenswrapper[4760]: I0226 09:50:04.571911 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8q9p\" (UniqueName: \"kubernetes.io/projected/9267ef8c-141e-43d3-bc47-f1cc2d4adf0e-kube-api-access-p8q9p\") on node \"crc\" DevicePath \"\"" Feb 26 09:50:05 crc kubenswrapper[4760]: I0226 09:50:05.183705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534990-xzm92" event={"ID":"9267ef8c-141e-43d3-bc47-f1cc2d4adf0e","Type":"ContainerDied","Data":"9961f0d5ed2f57e8d2ccbb71e75aad672d5cdc2301dc74ef7580abdca9649b96"} Feb 26 09:50:05 crc kubenswrapper[4760]: I0226 09:50:05.184065 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9961f0d5ed2f57e8d2ccbb71e75aad672d5cdc2301dc74ef7580abdca9649b96" Feb 26 09:50:05 crc kubenswrapper[4760]: I0226 09:50:05.183921 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534990-xzm92" Feb 26 09:50:16 crc kubenswrapper[4760]: I0226 09:50:16.640410 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:50:16 crc kubenswrapper[4760]: I0226 09:50:16.641012 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:50:16 crc kubenswrapper[4760]: I0226 09:50:16.641067 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:50:16 crc kubenswrapper[4760]: I0226 09:50:16.641629 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8a6b715fa4c1ecb177b72a20cf5ceb53a06a6669ca4244b7787f46455bad25b"} pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:50:16 crc kubenswrapper[4760]: I0226 09:50:16.641692 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" containerID="cri-o://e8a6b715fa4c1ecb177b72a20cf5ceb53a06a6669ca4244b7787f46455bad25b" gracePeriod=600 Feb 26 09:50:17 crc kubenswrapper[4760]: I0226 09:50:17.248816 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="62f749b1-23a5-43f1-8568-b98b688944fc" containerID="e8a6b715fa4c1ecb177b72a20cf5ceb53a06a6669ca4244b7787f46455bad25b" exitCode=0 Feb 26 09:50:17 crc kubenswrapper[4760]: I0226 09:50:17.248893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerDied","Data":"e8a6b715fa4c1ecb177b72a20cf5ceb53a06a6669ca4244b7787f46455bad25b"} Feb 26 09:50:17 crc kubenswrapper[4760]: I0226 09:50:17.249472 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerStarted","Data":"562470b16b37fa9a0abc2911e7b8101b0d6cbaced511e75ecaafe1ba8cbb149d"} Feb 26 09:50:17 crc kubenswrapper[4760]: I0226 09:50:17.249494 4760 scope.go:117] "RemoveContainer" containerID="f4efbe79637d17378d1e3c83568f1cb588976a61342df5089c0211e4fb3d69b9" Feb 26 09:50:58 crc kubenswrapper[4760]: I0226 09:50:58.001533 4760 scope.go:117] "RemoveContainer" containerID="a30925b264dc57723578def0354c1bf32084e4c69b273733b8b34f21b6166159" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.141321 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534992-2hknm"] Feb 26 09:52:00 crc kubenswrapper[4760]: E0226 09:52:00.142131 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9267ef8c-141e-43d3-bc47-f1cc2d4adf0e" containerName="oc" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.142148 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9267ef8c-141e-43d3-bc47-f1cc2d4adf0e" containerName="oc" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.142269 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9267ef8c-141e-43d3-bc47-f1cc2d4adf0e" containerName="oc" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.142826 4760 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-2hknm" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.148024 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jn6zk" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.148417 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.148682 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.152376 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534992-2hknm"] Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.263029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrnf\" (UniqueName: \"kubernetes.io/projected/3ea07dae-3726-4768-93a2-72e58833dc34-kube-api-access-5xrnf\") pod \"auto-csr-approver-29534992-2hknm\" (UID: \"3ea07dae-3726-4768-93a2-72e58833dc34\") " pod="openshift-infra/auto-csr-approver-29534992-2hknm" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.364190 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xrnf\" (UniqueName: \"kubernetes.io/projected/3ea07dae-3726-4768-93a2-72e58833dc34-kube-api-access-5xrnf\") pod \"auto-csr-approver-29534992-2hknm\" (UID: \"3ea07dae-3726-4768-93a2-72e58833dc34\") " pod="openshift-infra/auto-csr-approver-29534992-2hknm" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.384927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xrnf\" (UniqueName: \"kubernetes.io/projected/3ea07dae-3726-4768-93a2-72e58833dc34-kube-api-access-5xrnf\") pod \"auto-csr-approver-29534992-2hknm\" (UID: 
\"3ea07dae-3726-4768-93a2-72e58833dc34\") " pod="openshift-infra/auto-csr-approver-29534992-2hknm" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.466389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-2hknm" Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.782250 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534992-2hknm"] Feb 26 09:52:00 crc kubenswrapper[4760]: I0226 09:52:00.888876 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534992-2hknm" event={"ID":"3ea07dae-3726-4768-93a2-72e58833dc34","Type":"ContainerStarted","Data":"d1c9cc7e2027db5994e2165583eea52775f147f1fe82f0cbb5a03df28575d9aa"} Feb 26 09:52:02 crc kubenswrapper[4760]: I0226 09:52:02.934777 4760 generic.go:334] "Generic (PLEG): container finished" podID="3ea07dae-3726-4768-93a2-72e58833dc34" containerID="1eae3c33285e2cbfb854e12012300d84a86df09a5dfb534208119b0d12bcae05" exitCode=0 Feb 26 09:52:02 crc kubenswrapper[4760]: I0226 09:52:02.935045 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534992-2hknm" event={"ID":"3ea07dae-3726-4768-93a2-72e58833dc34","Type":"ContainerDied","Data":"1eae3c33285e2cbfb854e12012300d84a86df09a5dfb534208119b0d12bcae05"} Feb 26 09:52:04 crc kubenswrapper[4760]: I0226 09:52:04.292749 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-2hknm" Feb 26 09:52:04 crc kubenswrapper[4760]: I0226 09:52:04.395697 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xrnf\" (UniqueName: \"kubernetes.io/projected/3ea07dae-3726-4768-93a2-72e58833dc34-kube-api-access-5xrnf\") pod \"3ea07dae-3726-4768-93a2-72e58833dc34\" (UID: \"3ea07dae-3726-4768-93a2-72e58833dc34\") " Feb 26 09:52:04 crc kubenswrapper[4760]: I0226 09:52:04.401438 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea07dae-3726-4768-93a2-72e58833dc34-kube-api-access-5xrnf" (OuterVolumeSpecName: "kube-api-access-5xrnf") pod "3ea07dae-3726-4768-93a2-72e58833dc34" (UID: "3ea07dae-3726-4768-93a2-72e58833dc34"). InnerVolumeSpecName "kube-api-access-5xrnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:52:04 crc kubenswrapper[4760]: I0226 09:52:04.496917 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xrnf\" (UniqueName: \"kubernetes.io/projected/3ea07dae-3726-4768-93a2-72e58833dc34-kube-api-access-5xrnf\") on node \"crc\" DevicePath \"\"" Feb 26 09:52:04 crc kubenswrapper[4760]: I0226 09:52:04.950182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534992-2hknm" event={"ID":"3ea07dae-3726-4768-93a2-72e58833dc34","Type":"ContainerDied","Data":"d1c9cc7e2027db5994e2165583eea52775f147f1fe82f0cbb5a03df28575d9aa"} Feb 26 09:52:04 crc kubenswrapper[4760]: I0226 09:52:04.950669 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c9cc7e2027db5994e2165583eea52775f147f1fe82f0cbb5a03df28575d9aa" Feb 26 09:52:04 crc kubenswrapper[4760]: I0226 09:52:04.950264 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534992-2hknm" Feb 26 09:52:05 crc kubenswrapper[4760]: I0226 09:52:05.384203 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-jrj4w"] Feb 26 09:52:05 crc kubenswrapper[4760]: I0226 09:52:05.390968 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534986-jrj4w"] Feb 26 09:52:06 crc kubenswrapper[4760]: I0226 09:52:06.584191 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28" path="/var/lib/kubelet/pods/dfedd4bd-9a2b-4a22-8c0a-c0d5f0e20e28/volumes" Feb 26 09:52:16 crc kubenswrapper[4760]: I0226 09:52:16.639744 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:52:16 crc kubenswrapper[4760]: I0226 09:52:16.640339 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:52:46 crc kubenswrapper[4760]: I0226 09:52:46.640444 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:52:46 crc kubenswrapper[4760]: I0226 09:52:46.641379 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" 
podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:52:58 crc kubenswrapper[4760]: I0226 09:52:58.057119 4760 scope.go:117] "RemoveContainer" containerID="cca58d2544314ed47085ddbb220223f9ff63b73a6c043d5baaca8e4c925da0a5" Feb 26 09:53:16 crc kubenswrapper[4760]: I0226 09:53:16.640420 4760 patch_prober.go:28] interesting pod/machine-config-daemon-2fsxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 09:53:16 crc kubenswrapper[4760]: I0226 09:53:16.641076 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 09:53:16 crc kubenswrapper[4760]: I0226 09:53:16.641153 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" Feb 26 09:53:16 crc kubenswrapper[4760]: I0226 09:53:16.642086 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"562470b16b37fa9a0abc2911e7b8101b0d6cbaced511e75ecaafe1ba8cbb149d"} pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 09:53:16 crc kubenswrapper[4760]: I0226 09:53:16.642196 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" 
podUID="62f749b1-23a5-43f1-8568-b98b688944fc" containerName="machine-config-daemon" containerID="cri-o://562470b16b37fa9a0abc2911e7b8101b0d6cbaced511e75ecaafe1ba8cbb149d" gracePeriod=600 Feb 26 09:53:17 crc kubenswrapper[4760]: I0226 09:53:17.705396 4760 generic.go:334] "Generic (PLEG): container finished" podID="62f749b1-23a5-43f1-8568-b98b688944fc" containerID="562470b16b37fa9a0abc2911e7b8101b0d6cbaced511e75ecaafe1ba8cbb149d" exitCode=0 Feb 26 09:53:17 crc kubenswrapper[4760]: I0226 09:53:17.705511 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerDied","Data":"562470b16b37fa9a0abc2911e7b8101b0d6cbaced511e75ecaafe1ba8cbb149d"} Feb 26 09:53:17 crc kubenswrapper[4760]: I0226 09:53:17.705731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2fsxp" event={"ID":"62f749b1-23a5-43f1-8568-b98b688944fc","Type":"ContainerStarted","Data":"cea5476e7d3e5692ad6f56f92021d4138a2a530729875fbb95e1c7421fe262a8"} Feb 26 09:53:17 crc kubenswrapper[4760]: I0226 09:53:17.705753 4760 scope.go:117] "RemoveContainer" containerID="e8a6b715fa4c1ecb177b72a20cf5ceb53a06a6669ca4244b7787f46455bad25b" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.149405 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29534994-bctr4"] Feb 26 09:54:00 crc kubenswrapper[4760]: E0226 09:54:00.150106 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea07dae-3726-4768-93a2-72e58833dc34" containerName="oc" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.150119 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea07dae-3726-4768-93a2-72e58833dc34" containerName="oc" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.150230 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3ea07dae-3726-4768-93a2-72e58833dc34" containerName="oc" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.150705 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-bctr4" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.153470 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.154675 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534994-bctr4"] Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.156233 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-jn6zk" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.156621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.172936 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tsd\" (UniqueName: \"kubernetes.io/projected/c99ac188-1243-442b-9bd9-e03e32aa5fb2-kube-api-access-t4tsd\") pod \"auto-csr-approver-29534994-bctr4\" (UID: \"c99ac188-1243-442b-9bd9-e03e32aa5fb2\") " pod="openshift-infra/auto-csr-approver-29534994-bctr4" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.274412 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tsd\" (UniqueName: \"kubernetes.io/projected/c99ac188-1243-442b-9bd9-e03e32aa5fb2-kube-api-access-t4tsd\") pod \"auto-csr-approver-29534994-bctr4\" (UID: \"c99ac188-1243-442b-9bd9-e03e32aa5fb2\") " pod="openshift-infra/auto-csr-approver-29534994-bctr4" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.295264 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tsd\" (UniqueName: 
\"kubernetes.io/projected/c99ac188-1243-442b-9bd9-e03e32aa5fb2-kube-api-access-t4tsd\") pod \"auto-csr-approver-29534994-bctr4\" (UID: \"c99ac188-1243-442b-9bd9-e03e32aa5fb2\") " pod="openshift-infra/auto-csr-approver-29534994-bctr4" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.472218 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-bctr4" Feb 26 09:54:00 crc kubenswrapper[4760]: I0226 09:54:00.696525 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29534994-bctr4"] Feb 26 09:54:01 crc kubenswrapper[4760]: I0226 09:54:01.252870 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534994-bctr4" event={"ID":"c99ac188-1243-442b-9bd9-e03e32aa5fb2","Type":"ContainerStarted","Data":"b7d80d4c4b3ba7fca5d793995c1c653d3b685c18631199176ac86fb4c6ddbef4"} Feb 26 09:54:02 crc kubenswrapper[4760]: I0226 09:54:02.260421 4760 generic.go:334] "Generic (PLEG): container finished" podID="c99ac188-1243-442b-9bd9-e03e32aa5fb2" containerID="4e5bcf96a32070745c0f53a3df8237eccf436d8381ca417bebae4ee54b5711af" exitCode=0 Feb 26 09:54:02 crc kubenswrapper[4760]: I0226 09:54:02.260646 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534994-bctr4" event={"ID":"c99ac188-1243-442b-9bd9-e03e32aa5fb2","Type":"ContainerDied","Data":"4e5bcf96a32070745c0f53a3df8237eccf436d8381ca417bebae4ee54b5711af"} Feb 26 09:54:03 crc kubenswrapper[4760]: I0226 09:54:03.537916 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-bctr4" Feb 26 09:54:03 crc kubenswrapper[4760]: I0226 09:54:03.722680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4tsd\" (UniqueName: \"kubernetes.io/projected/c99ac188-1243-442b-9bd9-e03e32aa5fb2-kube-api-access-t4tsd\") pod \"c99ac188-1243-442b-9bd9-e03e32aa5fb2\" (UID: \"c99ac188-1243-442b-9bd9-e03e32aa5fb2\") " Feb 26 09:54:03 crc kubenswrapper[4760]: I0226 09:54:03.729751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99ac188-1243-442b-9bd9-e03e32aa5fb2-kube-api-access-t4tsd" (OuterVolumeSpecName: "kube-api-access-t4tsd") pod "c99ac188-1243-442b-9bd9-e03e32aa5fb2" (UID: "c99ac188-1243-442b-9bd9-e03e32aa5fb2"). InnerVolumeSpecName "kube-api-access-t4tsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 09:54:03 crc kubenswrapper[4760]: I0226 09:54:03.825784 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4tsd\" (UniqueName: \"kubernetes.io/projected/c99ac188-1243-442b-9bd9-e03e32aa5fb2-kube-api-access-t4tsd\") on node \"crc\" DevicePath \"\"" Feb 26 09:54:04 crc kubenswrapper[4760]: I0226 09:54:04.275795 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29534994-bctr4" event={"ID":"c99ac188-1243-442b-9bd9-e03e32aa5fb2","Type":"ContainerDied","Data":"b7d80d4c4b3ba7fca5d793995c1c653d3b685c18631199176ac86fb4c6ddbef4"} Feb 26 09:54:04 crc kubenswrapper[4760]: I0226 09:54:04.275877 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d80d4c4b3ba7fca5d793995c1c653d3b685c18631199176ac86fb4c6ddbef4" Feb 26 09:54:04 crc kubenswrapper[4760]: I0226 09:54:04.275960 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29534994-bctr4" Feb 26 09:54:04 crc kubenswrapper[4760]: I0226 09:54:04.631372 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qlzhw"] Feb 26 09:54:04 crc kubenswrapper[4760]: I0226 09:54:04.634933 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29534988-qlzhw"] Feb 26 09:54:06 crc kubenswrapper[4760]: I0226 09:54:06.585524 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a8da2b9-3f31-4b08-bccc-57458d7ed615" path="/var/lib/kubelet/pods/2a8da2b9-3f31-4b08-bccc-57458d7ed615/volumes" Feb 26 09:54:42 crc kubenswrapper[4760]: I0226 09:54:42.906228 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"